Moving Beyond Configuration Management to Application Lifecycle Management


In his CM: the Next Generation series, Joe Farah gives us a glimpse into the trends that CM experts will need to tackle and master based upon industry trends and future technology challenges.

What an accomplishment: thirty-three miners rescued in Chile after sixty-nine days of being trapped. The technology was there to drill the escape route and to design the capsule to bring them up, but if the rest of the team hadn't been integrated, success would have been difficult at best. There were psychiatrists and psychologists, medical experts and nutritionists, project managers, rescue workers, and overall coordination, in addition to the engineers. The whole team worked together for success.

In software, it's much the same. An engineering team with great version control, and even great CM tools, can produce a great application. But if the verification team, the documentation team, the marketing team, and the project management team aren't on board, that great application may never see the light of day. The whole team has to be involved. The application has a lifecycle, from conception to retirement, and that lifecycle has to be managed.

Perhaps when there was less market pressure, and fewer applications and products running on far fewer computers, it was fine to focus on software design and implementation. In those days of old, managing the new flexibility and capabilities afforded by software was the big challenge. But we've come a long way since then.

The First Generation
In the early days of software, the late '60s and early '70s, a new capability known as Version Control (VC), or Source Code Control (SCC), evolved on a number of different fronts, including IBM (for their mainframe code), AT&T (for their Unix code and telecom product code), and DEC (for their own PDP and VMS software).

I believe the IBM product was called update (or some variant thereof). It captured updates to be applied to a file, and so provided the capability of retrieving a file with any number of updates applied. Digital's product was known as CMS, the Code Management System. AT&T's Unix tool was the Source Code Control System (SCCS). Perhaps some of you have been exposed to all of these. They were the forerunners of the current CM tools. SCCS in particular was very widespread in the Unix world and made its way over to VMS as well.
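To make that delta idea concrete, here is a minimal sketch, in Python, of how such a tool can store one full base revision plus a list of deltas and reconstruct any revision on demand. It is purely illustrative; the class and its methods are hypothetical and don't reflect any particular product's storage format.

```python
import difflib

class DeltaStore:
    """Toy delta-based revision store: revision 1 is kept in full and each
    later revision is kept as the edits needed to produce it from its
    predecessor (the general scheme the first-generation tools used)."""

    def __init__(self, base_lines):
        self.base = list(base_lines)   # revision 1, stored whole
        self.deltas = []               # one delta per later revision

    def check_in(self, new_lines):
        prev = self.get(len(self.deltas) + 1)
        ops = difflib.SequenceMatcher(None, prev, new_lines).get_opcodes()
        # For each region, remember whether to copy from the previous
        # revision ('equal') or substitute the new text.
        delta = [(tag, i1, i2, new_lines[j1:j2])
                 for tag, i1, i2, j1, j2 in ops]
        self.deltas.append(delta)
        return len(self.deltas) + 1    # new revision number

    def get(self, revision):
        lines = list(self.base)
        for delta in self.deltas[:revision - 1]:
            rebuilt = []
            for tag, i1, i2, replacement in delta:
                rebuilt.extend(lines[i1:i2] if tag == "equal" else replacement)
            lines = rebuilt
        return lines

store = DeltaStore(["int main() {", "}"])
rev2 = store.check_in(["int main() {", "    return 0;", "}"])
print(store.get(1))     # the original two-line file
print(store.get(rev2))  # the file with the added return statement
```

Everything else in a first-generation tool, such as locking, baselines, and branches, was layered on top of exactly this kind of storage.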

A number of other VC tools evolved over time: RCS, PVCS, CMVC, etc. They dealt primarily with 4 basic functions:

  • Save/retrieve revisions (i.e., versions) of source code
  • Creation of baselines
  • Checkout/check-in functions
  • Branching of source code files

In addition, the Make utility and other similar capabilities evolved so that the code baseline, once retrieved from the version control database, could be built in a repeatable fashion. Other tools, such as diff and merge/diff3, evolved to support workspace management, including peer reviews of the changes in your workspace and merging of parallel changes into it.

This was the first generation of what soon came to be called (software) configuration management tools. The goal was modest and easily accomplished. Version control continued to evolve for many years, with schemes to improve retrieval time when there were numerous revisions, capabilities to track checkouts, and the ability to add comments to each revision. Remember, there were command-line interfaces, but no GUIs at the time. You had to be a techie. You were interested in your edits: preserving them and getting the right revisions checked in. The big goal was to allow builds to be done from the version control system, without using any files from users' disks and directories.

From VC to CM, the 2nd Generation
Late in the '80s, after a couple of decades of using the VC tools with various scripts, second-generation tools started to take shape. Not very quickly, though. A few private systems, notably PLS at Bell-Northern Research and SMS at Mitel Corporation, gave some of the direction for future systems, especially in terms of change packaging and change management. These were the forerunners of today's CM+ product (Neuma).

But one of the best-liked systems of the time was called DSEE, the forerunner of ClearCase (IBM), made for a specific workstation, the Apollo. The entire operating system allowed revisions of files to exist and to be referenced in "views" based on configuration criteria. The idea was not just to store baselines in the VC system, but to expose a view of those files just as if they resided in the file system of the Apollo Domain platform. When Apollo was bought out by one of its competitors (HP), DSEE started to die. But a group of developers formed Atria Software to create ClearCase, a product capable of running on many Unix platforms.

In the early '90s, Caseware unveiled Amplify Control, the forerunner of today's Synergy product (IBM). In parallel, a company called Polytron published its PVCS software, version control for the PC, including OS/2. After numerous acquisitions, Serena became the owner of the PVCS suite, now ported to Unix and OpenVMS.

During the '90s, these companies tried to differentiate themselves: PVCS with its strong PC base, ClearCase with its strong DSEE following and unique virtual object bases, Continuus (Synergy) with its flexibility, and CM+ with its full life-cycle capabilities. Each had severe handicaps to overcome through the '90s, while at the same time the concept of the GUI was defining itself, from early to later Motif (Unix) and from Windows 3.0 to Windows 95 (PC). With ease of use becoming important, all of these products survived with flying colors, ending the '90s with some momentum.

The signs of a second-generation CM tool (no longer called a version control tool) became somewhat clear, but also somewhat varied. The user interface went from CLI (command-line interface) to both CLI and GUI. The GUI was strong in the delta and merge tools but weaker from a management perspective, with a preoccupation with making merging easier. A couple of tools (Continuus and CM+) actually had some decent management capabilities from the GUI.

Control went from checkout and parallel-change management to various process control schemes covering changes (updates/change packages) and problem reports (defects/issues), usually implemented through scripting, triggers, and file/data permissions.

The CM application suite widened from version control (VC, branching, delta, merge) and Make to include change management with updates/change packages, build and release tracking, problem tracking, and even requirements management. The CM tool suite evolved as a combination of tools. In some cases, these were separate tools (database, version control, configuration management, change management, problem tracking, requirements management). In other cases they were all managed by a single tool on a single database. Most were somewhere in between. The big question: which is better, a consolidated monolithic tool, or best-of-breed applications glued together to provide some uniformity and traceability?

Other areas addressed by (second generation) CM included the ability to cope with multiple site development, the ability to serve the big development platforms (Unix, PC, Linux, and to some extent OpenVMS and mainframes), the ability to make tools more scalable, and some improvements in reporting, query and status accounting.

The end of the 2nd generation of CM tools defined software configuration management fairly clearly to some, but to others, "everything is CM". Most commercial tools today are primarily 2nd generation tools with some emerging 3rd generation capabilities.

From CM to ALM, the 3rd Generation
Early in the decade, the term ALM (Application Lifecycle Management) started gaining steam. Perhaps this evolved from the PLM (Product Lifecycle Management) of the hardware CM world. But, quite independently, software CM requirements evolved into ALM requirements.

Because CM was and is a backbone technology, safeguarding both assets and processes within a software-intensive organization, it was natural to extend its reach both earlier and later in the product lifecycle. As well as having the standard CM components, an ALM solution must address:

  • Version control (2G)
  • Build management (2G)
  • Change management (2G)
  • Configuration management (2G)
  • Problem tracking (2G)
  • Requirements management (2G)
  • Feature/task/activity/project management
  • Test case management
  • Test run management
  • Document management
  • Organization chart
  • Delivery management
  • Customer request tracking
  • Product management

Basically, the product must be managed from inception until its retirement. The solution must be able to manage the entire product roadmap, and the product family roadmap. What is ALM? If your solution contains all the information you need about your product for planning, development, validation, delivery, and support, then you have an ALM solution.

So why is ALM so difficult? It comes down to architecture. It might have been OK to link a change management tool or a problem tracking tool to a version control tool to create CM, but the solution was far from ideal. 2G CM solutions, for the most part (excluding monolithic solutions such as CM+), either focused on a narrower target (e.g., no requirements management or problem tracking) or ended up being administrative nightmares: too big, inflexible, and not scalable, for example.

Besides the functional requirements, the architectural requirements of a 3G ALM solution include:

1) Low maintenance: You need only part-time support to keep even very large ALM shops working. Backups are automated and reliable. Server outages are rare. System limits are rare and/or inconsequential. Scripting for special requirements is straightforward.

2) Easy and extensive customization: Customization is the same across all functions, requiring less training. The capability is strong enough that the process can be tailored to the roles and the information is readily available for each job function. Customization across process, data schema, and user interface is performed quickly and at low cost.

3) Easy navigation, including traceability: Why was this line of code changed? Which problems are fixed by this release? If you can't point-and-click to get answers without waiting for them, your ALM solution will lose half its value. You can run meetings from your ALM solution.

4) Ease of use across lifecycle functions: If training is necessary for each tool, you'll be losing a lot of time on training courses. If the tool doesn't follow your process closely or the terminology doesn't fit, there is more confusion. If it’s slow, you're wasting money and instilling dislike for your solution.

5) Role-based architecture: What you see is based on roles. There are role-specific dashboards. Record state transitions are protected by role. Permissions are by role, not by file system permissions.

6) Common repository: Your queries can span all functions (a small sketch following this list shows the idea). Your repository and/or schema is designed to support both engineering and management applications. Your backups are consistent.

7) Multiple site operation: You can work from any site, or move from site to site, as long as you are assigned the proper roles. You can restrict sensitive data from some sites if necessary. Your data is up to date at all sites. You can almost always recover from network outages automatically.

8) Extensive process support: You have proper process guidance as part of the solution. You can dynamically modify the process as your requirements evolve. You can clearly and easily track progress.

9) Reliability and availability: Your solution is not prone to down time, whether for administration or because of problems. You have disaster recovery in place. Your backups are guaranteed to be consistent. You can deal efficiently with data errors or sabotage introduced several weeks back.

10) Longevity: As a backbone application, you can't change out your solution with every project the way you might be able to with CM alone. Your solution must be current, but must have the ability to evolve to provide support for decades, even if your vendor is bought out, as will more than likely happen.
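As a concrete illustration of items 3 and 6, here is a minimal sketch of how a common repository turns a traceability question such as "Which problems are fixed by this release?" into a single query. The schema, table names, and data are hypothetical, not taken from any particular ALM product.

```python
import sqlite3

# Toy common repository: problems, change packages, and release contents
# all live in one schema, so traceability is a join rather than glue code
# between separate tools.
db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE problem         (id INTEGER PRIMARY KEY, title TEXT);
    CREATE TABLE change_pkg      (id INTEGER PRIMARY KEY, problem_id INTEGER,
                                  description TEXT);
    CREATE TABLE release_content (release_name TEXT, change_id INTEGER);
""")
db.executemany("INSERT INTO problem VALUES (?, ?)",
               [(1, "Crash on startup"), (2, "Slow report generation")])
db.executemany("INSERT INTO change_pkg VALUES (?, ?, ?)",
               [(10, 1, "Guard against missing config"),
                (11, 2, "Cache report queries")])
db.executemany("INSERT INTO release_content VALUES (?, ?)",
               [("R2.1", 10), ("R2.1", 11)])

# "Which problems are fixed by this release?" -- one query, one repository.
fixed = db.execute("""
    SELECT DISTINCT p.id, p.title
      FROM release_content rc
      JOIN change_pkg c ON c.id = rc.change_id
      JOIN problem    p ON p.id = c.problem_id
     WHERE rc.release_name = ?
""", ("R2.1",)).fetchall()
print(fixed)   # [(1, 'Crash on startup'), (2, 'Slow report generation')]
```

The point is not the SQL; it's that when every function shares one repository, the navigation, roles, and consistent backups described above can be built once in the framework rather than re-glued for each pair of tools.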

Any of these seem like things you'd like to improve in your shop? If you're moving to an ALM solution, make sure you've got these things covered.

The 3 Fundamentals
What happens when you try to glue together the best of breed of all of the ALM functions? Pretty much all of these architectural requirements lose out. How can something be low maintenance when you need ten different administrator training courses to handle each of the tools? The same is true for customization. Go down the list. Yes, you can glue in traceability if you know the process and data ahead of time, and as long as the glue doesn't have to change whenever any one of the tools is upgraded. But there are three fundamentals of ALM that must always be used as a starting point:

1) A common repository for all management functions
2) A common process engine/capability across functions
3) A common user interface architecture

Notice any word that's in common across these fundamentals? Nobody disagrees with these fundamentals, but not everyone embraces them. Perhaps the VC engine doesn't need a database. The requirements tool has its own. And so forth.

There have been a couple of efforts to create ALM backplanes, reaching back into the early '90s. These have failed. The reason they failed is the premise that existing tools can simply be plugged into the backplane. Plugging in works only if all of the tools share a common repository, a common process engine, and a common UI architecture. And no, SQL does not mean a common repository, a common scripting language does not mean a common process engine, and KDE or .NET do not mean a common UI architecture. Still, a backplane can work if the framework itself includes the repository, process engine, and user interface architecture. It's just that existing tools don't fit the mold. And some vendors won't sign up to a framework because they want to sell their exclusive solution.

If you don't have commonality in these areas, how is your multiple-site capability going to span all of the ALM functions? How are you going to improve ease of use if there are several different user interfaces and/or behaviors to learn? How are roles and traceability going to cut across all of the functions if you have differing process engines, repositories, and user interface architectures? How are you going to customize when you have to look at the impact across a dozen different tools? What is your response time going to be like when you're trying to navigate traceability links?

That's why both MKS (perhaps with some exceptions) and Neuma, two Canadian companies, have set the architecture in place to include the repository and process engine across the ALM suite. The result is that both of these solutions are not just good ALM tools, but are easily and extensively customizable. When the framework is common, R&D goes into the framework so that all of the functions may benefit. In fact, Neuma is expected, later this year, to introduce a CM and/or ALM Toolkit product which consists primarily of its framework. That way, even the NIH (not invented here) syndrome can be addressed, and those who like to fine-tune process, and extend it throughout the business, can have a strong starting point.

What Do I Get from 3rd Generation ALM?
So ALM is the 3rd generation of CM. What does it get me and what can I expect to see in a 4G ALM solution?

A third generation ALM solution should allow you to get the information you need to run your shop: requirements traceability, release contents, iteration advances, backlog, current assignments, process documentation, etc.

And if you have a multiple site solution for source code in your 2G solution, you should be getting a multiple site solution for all of your project/product data in the 3G ALM solution.

If you've got a team maintaining and supporting your 2G solution, you should now have two or three part-time CMers covering for each other during vacation times, while you put the rest of the old CM team to work on core business. No more wasting time going to each end user's workstation to install software or fix problems: the software should be in a common location, such as might be the case with a thin client.

Your ALM training must be modular, by function, but simple: the framework first, then the functions. Training on those functions should be training on your process and how the tools support it, not on "their" tools and what parts of the process they can support. Because of this, ALM training should be an "in-house" capability, even if it is outsourced, so the courses keep up with your evolving process. If your end-user courses are taking days or weeks, your solution is too difficult to use. Do you need training? Yes. ALM is complex, and any module is complex from a process perspective. There's typically some terminology to master. The training should really be centered on process, though, not on working around the inadequacies of the tools.

If the vendor insists on doing customization, the overhead, delays, and costs will be too great. If simple user interface or process changes regularly require planning and cost estimates, the tools are too complex or inflexible.

You should be able to run your meetings from your ALM tool. No more distributing reports (even in PDF format) for people to digest. The agenda items should come from a data query in the repository. You want to be able to navigate issues completely as they come up in the meeting. The 3G ALM tool should have the traceability and the navigational capabilities to do this. Decisions should be recorded in the repository data, not in the minutes. And it had better be sufficiently peppy to do all of this.

Reports should virtually disappear from your work flow. Instead, dashboards and query capabilities should appear. No more wading through layers of useless data on paper to get the information you need. What information do you need? Why is the report being created (typically to address a series of needs)? You should replace reports with data navigation using quick links from the user interface to get to the data you need.

Your ALM tool should be the primary communication tool. After all, all the project information is there. You should be able to customize your user interface so that you see the precise information that you need: source trees, to-do lists, priority issues, process guidance, etc.

The list goes on. The ALM solution is much more than a set of point functional solutions glued together.

The Next Generation
What's next? In a 4G solution (will it have a name different from ALM?) you should be able to create and customize dashboards in not much more than the time it would otherwise take you to track down the information content. You should be able to specify and change the context of your dashboard: from product/release to product/release, from user to user, from project to project, etc. without having to leave your dashboard. You should be able to place summary information in the format you want on the dashboard, and drill down into more and more detailed information.

You should have a slate of dashboards (and workstations) that are specific to the roles and the tasks of those roles: a peer review dashboard, a build comparison dashboard, time sheet summary dashboards with project roll-ups, development status for the current iteration and/or release, requirements traceability without a full-blown matrix to navigate, and so forth.
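To make the idea of context-driven dashboards a bit more tangible, here is a small hypothetical sketch in Python. Nothing here comes from an actual 4G product; it simply shows a dashboard defined once as a set of panels, each bound to a query that is re-run as the context (product, release, user, and so on) changes.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class Panel:
    title: str
    query: Callable[[Dict[str, str]], List[str]]  # context -> rows to display

@dataclass
class Dashboard:
    name: str
    panels: List[Panel]

    def render(self, context: Dict[str, str]) -> None:
        print(f"== {self.name}: {context['product']} {context['release']} ==")
        for panel in self.panels:
            print(f"-- {panel.title}")
            for row in panel.query(context):
                print("   " + row)

# Toy data standing in for the ALM repository.
OPEN_PROBLEMS = {
    ("WidgetPro", "R2.1"): ["P-17 crash on startup"],
    ("WidgetPro", "R2.2"): ["P-23 slow reports", "P-24 typo in installer"],
}

iteration_status = Dashboard("Iteration status", [
    Panel("Open problems",
          lambda ctx: OPEN_PROBLEMS.get((ctx["product"], ctx["release"]), [])),
])

# Same dashboard definition, two contexts: only the parameters change.
iteration_status.render({"product": "WidgetPro", "release": "R2.1"})
iteration_status.render({"product": "WidgetPro", "release": "R2.2"})
```

Switching from one release to another simply re-renders the same panels against new parameters, which is the "change the context without leaving the dashboard" behavior described above; drill-down would just be another query bound to a row.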

Your branching strategy is built into the 4G solution, so it tells you when you need to branch or merge; there is no more need to educate users on complex branching strategies. It also has the capabilities necessary to reduce the need for branching dramatically, by eliminating the need to branch for purposes other than parallel release development.

Your ALM tool usage will always be in terms of a context, which allows the tool to pick up information automatically as you work and create new artifacts. Traceability links become automatic, not unreliably filled in by the end user.

Administrative efforts such as consistent backups and stand-by disaster recovery become automated. Eliminate the effort, eliminate the error. The effort of multiple-site support approaches zero as the framework handles the job across all functions.

Information content becomes richer, and not only in terms of data, navigation diagrams, and so on. Does the need for human direction, communication, and leadership disappear? Not at all. It becomes much enhanced, because everyone's on the same page and the strategies and direction can be communicated instead of the details, which are managed and communicated within the ALM solution.

Need more role-specific attention from your ALM tool? The 4G solution should allow you to make it so, quickly. The goal of the 4G solution is to take the different roles and, one by one, improve productivity in them. Fine-tune, add missing capabilities, and create useful dashboards that make the tool easy to use with minimal navigation necessary.

Something missing from your solution? Add it on in time for next week's iteration or meeting. The 4G ALM solution is extensible to encompass all related business processes, from HR and budgeting to sales and CRM. Ideally, these extensions are made available to others as quick add-ons.

As technology evolves, your tablet should be part of your ALM interface. Your smartphone should, too, but perhaps quite differently. I think it's too soon to tell what forms will be where. You'll need keyboards for some things (e.g., raising a problem report), but generally not for query and navigation. But will you use buttons or your accelerometer?

When to Switch from CM to ALM
Should we go and implement ALM tomorrow? No. Yesterday would be much better. Can it be done that fast? There are ALM tools that can be put in place quickly. Evaluation. Data population. Evaluation. Customization. Pilot training and use. Evaluation. Full cutover. Those are the steps you should use. Your vendor should be happy to help, at least up to the end of the second evaluation, at no cost. If not, put them further down on the list.

It's hard to get a team to move from CM to ALM. Why? Because the part of the team using CM will not be the only beneficiaries of the ALM tool, so they'll resist change. Education is important, but not education about what ALM is or how the new tool works. Instead, it should be along the lines of "we're having these problems and our ALM solution will address them," or "the benefits to you in your role will be...," and so on. If you educate like that, and involve the team in the process (after some education), the buy-in will be greater.

If you're starting a new project, don't start with a 2G CM solution. Start with ALM. If ALM seems "too much," look at a different solution; ALM does not have to be too much. A later generation solution should always make life easier, not more difficult. This is especially true in an agile environment, where there's more resistance to formal tools and processes that impose themselves: it's critical to have advanced tools that can support the higher demands of agile while making life easier for all of the team roles.

If you've looked and are intimidated by the ALM world, look harder until someone can tell you why ALM is less intrusive than CM alone. If they can't, then you have a right to be intimidated, as the solution is probably still half-baked, or perhaps imposes a rigid process that doesn't fit the mold. Move on to the next one. With ALM solutions, you'll have to look deeper under the hood than with CM solutions to make sure that end-to-end management can accommodate your process without a lot of up-front costs. 3rd and 4th generation solutions should eliminate most administration while drastically improving functionality and ease of use.

I'm glad the rescue team stayed the course, in spite of early criticism: drilling boreholes instead of rescue tunnels, involving NASA, using advanced technology, doing things right. In the end, the miners were out in two months, 50 percent ahead of schedule! Look beyond CM and you'll see improvements not only in your engineering team's capabilities, but also, and perhaps more importantly, in getting the product out the door.

