How Vendors Can Move the CM and ALM Industries Forward


In his CM: the Next Generation series, Joe Farah gives us a glimpse into the trends that CM experts will need to tackle and master based upon industry trends and future technology challenges.

Summary:
Joe Farah writes from a vendor's perspective, explaining what vendors can do to help the configuration management (CM) and application lifecycle management (ALM) industry move forward. Make sure the CM and ALM components and features that you're working on are well defined.

This month I thought I would write an article from a vendor's perspective. As vendors, what do we need to do to help the CM and ALM industry move forward? That is, in itself, a loaded question, because "forward" assumes a starting point, and the industry is not at a common starting point. There are mostly second-generation solutions out there, with many vendors trying to build third-generation functionality onto them.

There's a problem there. Elon Musk may have said it best after the splashdown of his Dragon capsule this month: when you assume existing technologies, you assume their cost structures as well.

Trying to build the next-generation space access vehicle from components of the Space Shuttle (i.e., STS) may save some costs in some parts of the development, but overall, the solution will be relegated to the cost structure of the STS. The Falcon/Dragon technology of SpaceX started with a few key principles that have allowed SpaceX to set itself apart from the others:

  • Build what we need internally (keep control of our destiny)
  • Use common technology across the subsystems and products
  • Perform extensive testing, with heavy automation
  • Create the ability to react quickly to problems
  • Keep administration and overhead low
  • Reuse components where possible (that is, reuse of the rocket/capsule for a second launch)

Elon, diplomatically, also said that the NASA-commercial partnership works. They've proven it. It does work, but only with the right commercial partners, as we're sure to see in the future. NASA, to their credit, recognizes that not only do they have a lot to teach SpaceX, but they also have a lot to learn from SpaceX. Except for a launch escape system (which the shuttle does not have either), the Falcon/Dragon is very close to having the ability to support manned flight, at a small fraction of the cost of any previous US manned space program.

How does this apply to ALM? From a vendor perspective, it's not easy to just throw everything away and start over again. That doesn't mean we can't offer more and more functionality and capability. It does mean that without a fresh start, what we produce is constrained in cost structure and architecture. Let's look to the past for a few examples.

In the 1990 time frame, Atria took the experience of the Apollo DSEE version control model and applied it to a new product called ClearCase. Had Atria attempted to build out from Apollo DSEE, it would have carved out a nice little niche market rather than taking the market by storm. ClearCase has been dramatically successful, though its twenty-year-old architecture is showing signs of constraint. Still, a little polish here, some new infrastructure there, and a nice product evolves in 2010 (RTC). Is it a full third-generation CM/ALM product? Yes and no: lots of 3G capability in the end-user experience, lots of 2G in the back end. But that's because it is somewhat constrained.

In the mid-'90s, Perforce started from scratch with a new product. This, again, was very successful, primarily because they were unconstrained in their approach. They were able to say, "We don't want the admin headaches that other tools show," and also, "We can't have the performance issues characteristic of the leading vendors." With those goals in mind, they successfully created their CM product.

In the '70s and '80s, yours truly produced some very capable 2G CM/ALM tools for Nortel (then Bell-Northern Research) and Mitel, both large Ottawa telecom companies. When an attempt in 1989 to acquire the Mitel ALM technology failed, Neuma started to create a new product from scratch. However, as Elon Musk did at SpaceX, Neuma looked at the full industry requirements for ALM and decided not to build a CM tool or an ALM tool, as had been done at BNR and Mitel, but instead to build an architecture that could endure. As a result, Neuma moved forward, with some mistakes, but at the turn of the century came out with a full 3G CM/ALM tool, and from there moved on to the recently released 4G tool.

Looking back, Neuma was able to do this by following some specific guiding principles:

  • Build what we need internally
  • Use common technology across the subsystems and products
  • Perform extensive testing, with heavy automation
  • React quickly to problems
  • Keep administration low
  • Make extensive, reusable customization easy

A recent general forum question asks which is better: best-of-breed tools integrated together, or an integrated ALM solution? I'll go one step further and break down the integrated ALM solution into a common-vendor integrated solution versus a common-core integrated solution.

It's a good time to learn a lesson from Elon Musk and SpaceX.  We're heading into a new year shortly.  If we want to continue to deliver next generation tools to the market, as vendors, we need to focus on a few things going forward.

  • Letting customers define "best-of-breed"
  • Bringing costs down for ALM tools
  • Focusing on common core technology
  • Responding rapidly to change requests
  • Making ALM functionality app-accessible

A bit of explanation follows.

Letting Customers Define "Best-of-Breed"
The number one requirement of an ALM tool is to support an organization's process the way the organization wants. Neuma discovered this when it did its market research in 1990, and it's the same today. A best-of-breed tool is not defined by the tool's capabilities, but rather by the customer's requirements.

Each customer is going to have different requirements, and the ALM tool must be able to support them. By all means provide guidelines - don't let them do file-based CM when change-based CM is so obviously superior in all cases. (I'm talking software CM here - this does not necessarily apply as clearly elsewhere.) But make sure you can support the customer's process.
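
To make the file-based versus change-based distinction concrete, here's a minimal sketch of the change-based idea (the names and structures are hypothetical, not any particular vendor's API): file revisions travel together as one logical change, traced to the problem report or task that motivated them, and promoted all-or-nothing.

```python
# Sketch of change-based CM. All names here are illustrative only.
from dataclasses import dataclass, field

@dataclass
class FileRevision:
    path: str
    revision: int

@dataclass
class Change:
    """The unit of promotion: a set of revisions plus traceability."""
    change_id: str
    task_id: str          # traces back to a problem report or feature
    description: str
    revisions: list = field(default_factory=list)
    status: str = "open"  # open -> promoted

    def add(self, path: str, revision: int) -> None:
        self.revisions.append(FileRevision(path, revision))

    def promote(self) -> None:
        # File-based CM promotes each file revision separately, which can
        # land half a fix; change-based CM promotes all-or-nothing.
        if not self.revisions:
            raise ValueError("nothing to promote")
        self.status = "promoted"

fix = Change("c42", task_id="pr-1093", description="Fix timeout handling")
fix.add("net/socket.c", 17)
fix.add("net/socket.h", 5)
fix.promote()  # both revisions move together, traced to pr-1093
```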

No problem! We'll just give them a compiler or Perl and they can do anything. We'll also throw in a GUI drawing tool and maybe an RDBMS. That should do it, except for the word processor for documentation changes.

That certainly lets the customer do what they want. The problem is that it's too costly for the customer to do what they want. That's not a problem, we say: we'll make expertise available at a reasonable rate. And this works, provided the company has plenty of time to get the solution in place and doesn't go broke first. Unfortunately, I've seen firsthand where a company spent too much getting the solution in place and went broke before it got there.

If you want to let a customer define the "best-of-breed" tool, they must have very high-level tools that allow them to do so, not in years or months, but in days or hours. Is that possible? In 1970, IBM would have said no if asked whether users could build their own computer to meet their needs in days or hours. Now, they can log on to a Dell (or other vendor) site, select a starting configuration, change options, base software, processors, etc., and in a few days have their machine delivered to their door.

CM and ALM components and features need to be much more clearly defined so that infrastructures can be built, just as Dell did, to create the product quickly and easily. One of the reasons Neuma was successful in creating 4G CM/ALM is that it focused on the infrastructure, making it easy for customers to define "best" in their own terms.
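
As a purely illustrative sketch of that Dell-style assembly (the catalog, components, and option names below are all invented, not any vendor's actual offering), well-defined components would let a customer compose a solution declaratively instead of programming it:

```python
# Illustrative only: assembling an ALM solution from well-defined
# components the way Dell assembles a machine from catalog options.
CATALOG = {
    "version_control": ["change_based", "file_based"],
    "issue_tracking":  ["basic", "with_crm"],
    "multisite":       ["none", "replicated", "distributed"],
    "process":         ["agile", "stage_gate", "custom"],
}

def assemble(**choices):
    """Validate a customer's selections against the component catalog."""
    for component, option in choices.items():
        if component not in CATALOG:
            raise KeyError(f"unknown component: {component}")
        if option not in CATALOG[component]:
            raise ValueError(f"{component} has no option {option!r}")
    # Unspecified components fall back to the first (default) option.
    return {**{c: opts[0] for c, opts in CATALOG.items()}, **choices}

# The customer defines "best" in their own terms, in minutes, not months:
solution = assemble(version_control="change_based",
                    multisite="distributed",
                    process="agile")
print(solution)
```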

Bringing Costs Down for ALM Tools
Say that you need a low-cost ALM tool: open source, free of charge. That fits the budget, right? Well, in a sense, as long as we restrict our definition of cost to the cost of acquiring the tool. What about the costs for:

  • Training
  • Data import (and, eventually, export)
  • Maintenance
  • Performing upgrades
  • Customization and process support
  • Integration with other tools and data sources
  • Additional tool components not included
  • Administration
  • Multiple site operation
  • Disaster recovery and backups
  • Security
  • Downtime costs
  • Productivity for each end-user role, including communication productivity
  • Hardware/network platforms/performance
  • Repository and process engine infrastructure

It is important that license acquisition not cost you an arm and a leg. But training is typically an even bigger cost. In some cases administration is an equal cost. And customization costs run from not-so-much to out-of-the-ballpark, not to mention that such customizations must then be supported and must survive upgrades. Then there's the integration of a few tools, which is fine, as it's a one-time cost, as long as the tools never change. Also, multiple-site operation: did you mean for the source code or for the problem report database? There are a few other cases too, along with consistent backups. This may not be a problem if your multiple-site solution is just replication of everything everywhere.

The picture is clear enough. If the goals going in are not to reduce all of these costs, we'll inherit the cost structure of the legacy pieces, even if we use open source software.

Focus on Common Core Technology
This is certainly an approach that will help cut down costs. If we can use the same process engine, database, multiple-site capability, and customization technology for all of our ALM tools, we will reduce training costs, administration costs, and a bunch of other costs.

At the same time, we can focus on the core capabilities, enabling better reporting, traceability navigation, dashboard generation, advanced data management, reliability, etc. Every component of the ALM suite benefits from each advance in the core. We can also afford to spend the time to make each capability the best, because every improvement helps all of the tool components, making it cost-effective to do so.

Better yet, we won't get different technology variants of the same problems in each of the tools: one problem, one fix. The integration of the separate tools becomes trivial because the common core components are already integrated, so it's more a matter of user interface consistency and a good data schema for providing the traceability we need.
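
As a hypothetical sketch of what that common core buys (the record types, IDs, and helper functions are invented for illustration): if every artifact, whatever its type, is a record in one repository with one linking mechanism, traceability falls out of the schema rather than out of tool-to-tool glue.

```python
# Hypothetical sketch of a common core: one record store and one link
# mechanism shared by requirements, problems, changes, and builds.
records = {}  # id -> {"type": ..., "title": ..., "links": [...]}

def add_record(rec_id, rec_type, title, links=()):
    records[rec_id] = {"type": rec_type, "title": title, "links": list(links)}

def trace(rec_id, depth=0):
    """Walk traceability links; works identically for every artifact type."""
    rec = records[rec_id]
    print("  " * depth + f"{rec['type']}:{rec_id} {rec['title']}")
    for linked in rec["links"]:
        trace(linked, depth + 1)

add_record("req-7",  "requirement", "Support offline mode")
add_record("pr-12",  "problem",     "Crash when offline", links=["req-7"])
add_record("chg-3",  "change",      "Fix offline crash",  links=["pr-12"])
add_record("bld-88", "build",       "Release 2.4 build",  links=["chg-3"])

trace("bld-88")  # build -> change -> problem -> requirement, one engine
```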

Rapid Response to Change Requests
Customers will want changes after they find problems and we need to be responsive to these.  It's no longer good enough to say, "We'll look at that for the next major release".  It's not even good enough if we replace "major" with "minor." Customers want changes now.

Changes will fall into two main categories: those that can be done through customization of the existing tool release, and those that need changes to the tool release.

That's very straightforward, so how do we move forward here? Neuma claims that more than 95 percent of its change requests can be handled in the existing tool release, usually through a simple email exchange or even over the phone, and most problems can be worked around easily. Most vendors have some level of this capability, but it's not at the 95 percent level.

The goals are two-fold: move as many of the "need new release" changes as possible into the "can be done in the current release" category, and change "can be done" to "easy to do." There is wide variation in the industry. I've seen changes that take weeks on one system but minutes on another. Which do you think costs more? Which customer was happier with the response?

Because it is easy for a customer to change their process, the ALM tool architecture must make it easy to change the tool to support that process. Whether it's terminology, triggers, state-flow, user interface, or even the customization tools themselves, it should be easy to make changes so that these work better. Don't think of it as giving away your customization services; think of it as having the ability to do more with a given set of customization services. Because if you don't, someone else will. This has to be the vendor's attitude.

Then, when you're planning a new release of your tool, make sure that a very large portion of the proposed features go into making the tool easier to change. Add in a higher-level change perspective too, so that instead of dealing with "labels," the user can deal with "product road map" or "baseline creation." The user doesn't want to be in the weeds; they just want to know that the weeds work the way they're supposed to. Don't tell me how to place widgets on my dashboard easily - do it for me. Don't tell me how to create complex widgets - give me a checkbox or a pick-list that will do it. Don't hand me a bunch of scripts - show me the organized data that drives the process. That's what the user wants, and that's what the vendor had better start delivering.
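
As a sketch of "organized data that drives the process" (a hypothetical format, not any shipping tool's): the state-flow and its triggers live in tables the customer edits, not in scripts the customer programs.

```python
# Sketch of data-driven customization: the workflow lives in data the
# customer can edit, not in code. The format here is invented.
STATE_FLOW = {
    # state:        allowed next states
    "submitted":   ["analyzed", "rejected"],
    "analyzed":    ["in_progress"],
    "in_progress": ["resolved"],
    "resolved":    ["verified", "in_progress"],
    "verified":    [],
    "rejected":    [],
}

TRIGGERS = {
    # on entering this state, run this action (a notification stub here)
    "resolved": lambda issue: print(f"notify tester: {issue['id']} resolved"),
}

def transition(issue, new_state):
    """Move an issue along the customer-defined state-flow."""
    if new_state not in STATE_FLOW[issue["state"]]:
        raise ValueError(f"{issue['state']} -> {new_state} not allowed")
    issue["state"] = new_state
    if new_state in TRIGGERS:
        TRIGGERS[new_state](issue)

issue = {"id": "pr-12", "state": "submitted"}
for next_state in ("analyzed", "in_progress", "resolved"):
    transition(issue, next_state)  # "resolved" fires the notify trigger
# Changing the process means editing STATE_FLOW, not writing new code.
```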

App-Accessible ALM Functionality
This one came on like a storm over the past year or so. Mobile devices, tablets, etc., have a new user interface paradigm: apps. They're easy to use with little or no documentation. Not only do I need the same ease of use in my ALM role, I need an app that will let me do a lot of my work remotely: check on progress, give approvals, create a new baseline, identify the new features for the customer whose site I'm visiting to install a new release.

So, the information has to be mobile. There are many ways to achieve this, and I don't have all the answers - the technology is too new and changing too fast. Maybe we want a smart client on the tablet or phone, rather than a thin client to the central site, so that when I lose connectivity, I still have the answers. Or maybe that's the wrong architecture or cost structure.
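
One possible shape for that smart-client idea, sketched under the assumption of a simple query interface (the client and server classes below are invented for illustration): cache the answers locally so that losing connectivity doesn't mean losing the data.

```python
# Sketch of a smart client for a tablet or phone: answers are cached
# locally so they survive a lost connection. All classes are invented.
class FlakyServer:
    """Stand-in for the central site; raises when 'offline'."""
    def __init__(self):
        self.online = True

    def query(self, q):
        if not self.online:
            raise ConnectionError("no connectivity")
        return f"answer to {q!r}"

class SmartClient:
    def __init__(self, server):
        self.server = server  # central site, reachable only sometimes
        self.cache = {}       # last known answer per query

    def query(self, q):
        try:
            result = self.server.query(q)  # live data when connected
            self.cache[q] = result
            return result
        except ConnectionError:
            if q in self.cache:
                return self.cache[q]       # stale but available offline
            raise                          # never seen this query before

server = FlakyServer()
client = SmartClient(server)
print(client.query("baseline 2.4 contents"))  # live answer, now cached
server.online = False
print(client.query("baseline 2.4 contents"))  # served from cache offline
```

A thin client would fail outright on that second query; the smart client still answers from its cache on the plane.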

What I do recommend is that vendors watch this space carefully and move forward with the correct decisions - making the wrong moves too soon can be costly. However, making no moves at all may leave you with a knowledge gap.

Conclusions
You're probably reading this not as a vendor, but as a CM user, and I'll bet you've heard a lot of things you'd like to hear. Tell your CM/ALM vendor community that you're ready to move into the next generation. You don't want to use a mainframe computer when a mobile tablet is now available to you, with more power and a better interface. You don't want to drive a Model T when a Nissan Leaf or Chevy Volt is more to your budget and liking (don't worry - the price of electrics will come down quickly, and the range will increase dramatically over the next few years). Tell them you want a 3G or 4G solution now.

Vendors, the difficult part of CM technology is not the technology itself. It's the requirements: figuring out the ease of use, and getting the cost of sales and support down. Don't be afraid to start a pilot project that has access to your existing technology and requirements, but that can cut the cords of constraint imposed by legacy components. It's time to create next-generation solutions. Sure, keep the polish handy, keep the band-aids available, add on a bunch of nice-looking contraptions, etc., until your new technology is ready. But if you don't cut the cords, you'll be left in the dust. I hope you'll take this article as a partial blueprint. Because I don't care how much support you throw at that old DOS clone, you won't convince the user that he has a tablet. And that's why you'll see so many new OS platforms evolving this decade.

The only additional word of advice that I can add: make sure the CM and ALM components and features that you're working on are well defined. Not well-defined to fit all legacy tools, but rather well-defined to meet the next generation.

Elon Musk said he's not in it for the profit, but to get easy, reliable, affordable space access, so that we can go beyond a few cameo space accomplishments. Be certain, though, that the profits will come because of this. Similarly, vendors, make sure that you're in it for the advance of CM/ALM capabilities - to take it beyond our techies and software projects to the wider world of information. If you can achieve this with your tools, the profits will come.
