Five Mistakes a Company Can Make When Using Configuration Management

[article]

In his CM: the Next Generation series, Joe Farah gives us a glimpse into the trends that CM experts will need to tackle and master based upon industry trends and future technology challenges.

Summary:
Joe Farah details five mistakes a company can make when using configuration management (CM). Until we start to admit to our mistakes and strive to reach the next generation of CM, we'll stagnate.

I remember it clearly. It was about ten years ago, during the dot-com boom.  A new telecom startup had lots of money.  They wanted to do configuration management (CM) the right way, so they bought the most expensive tools and hired consultants to configure and glue things together.  Three months later they were still working at integrating CM with problem tracking the way they wanted it.  That company went belly up quickly.  From the ashes, a new company was born.  The founders didn't want to make the same mistake.  After letting them look at a different solution for a few weeks, I went in, and in three days had the team trained and working on a solution that already did what they had been trying to do and a lot more.  In three days, all of their data (from the previous company) was loaded in and users were trained to support and use the system.  The solution cost about the same as they had paid for their three months of consulting previously, and far less than their previous licensing costs.

Mistake #1:  We assume that the most expensive, or most prevalent, solution is the best
Why is this not the case?  Is Windows the best?  It may be getting there, but the Mac has put it to shame over the last quarter century, even though the Mac's market share remains low. (By the way, I'm not, and never have been, a Mac user.)  What about rockets?  A little company of fewer than 1,000 people named SpaceX is raising eyebrows as it promises to lower the cost to space by an order of magnitude, with its first successful launch last year and its heavier-lift vehicles targeted for this year.

CM is a complex business.  The market leaders got there by carving out a niche and growing market share on it.  Maybe it's freeware (CVS), COTS packaging (PVCS), or a virtual file system (ClearCase).  Whatever the reasons, these have captured market share.  But don't make the mistake of assuming the market-leading tools are the best.  Look around and you'll see more capable, more reliable, less costly, and less risky solutions.  Just as SVN goes one better than CVS, a better alternative likely exists for any solution that is not delivering.

Branching Strategy is Key
On another occasion I was exposed to a project while the team was deciding where to go from their current CM solution.  They needed to work in a multiple-site environment, and they were taking the time to make sure that their revised branching strategy was going to work.  They had spent a couple of months on it and had a strategy in the review process, but just weren't yet sure how well it would work.  They did know that they would have to spend some time training the staff in the strategy and implement some quality controls to ensure that the strategy was being followed properly.  In the meantime, another company of about the same size, also needing to work in a multiple-site environment, had a very simple release-based branching strategy with exclusive checkouts.  It took them virtually no time to train staff, and they used the tool to enforce the strategy.

Mistake #2:  We develop ever more complex branching strategies to deal with ever more complex CM.
Why were these two situations so different?  More than likely, the reason is one of process and of matching technology to the process.  The CM world invented branching almost 40 years ago.  It allowed teams to work on the next release in parallel with supporting the current one.  It was and is a great concept and a valuable tool.  However, processes evolved that said:

  • We need to provide a common support area for our developers.
  • We need to track various testing promotion levels.
  • We need to keep files associated with a single change together.
  • We need to track parts of a change according to who made them.
  • We need to keep track of baselines and specific builds.
  • We need to have multiple variants of our baselines.

It's good to see process evolving.  When the technology doesn't evolve with it, though, guess what happens.  Well, two things happened.  The first was the invention of the main branch.  The second was the overloading of the branching concept with a myriad of reasons to branch, and the addition of labeling technology to help sort out the mess caused by this overloading.

I know there are two camps out there:  MAIN branch, and main-per-release branch.  I hope there won't always be.  Why is there a MAIN branch camp?  Because we sculpt our tools to support it. However well we sculpt, though, a MAIN branch solution will always be burdened with the following problems:

  • When do we switch the main branch from release 2 to release 3?
  • What do we do with release 3 changes that aren't ready to merge at that time?
  • What happens to release 2 changes that aren't yet checked in?
  • How will we define the branching structure to continue to support release 2 after it moves off MAIN and how do we instruct developers?
  • What about release 4 changes?  Where do we put them in the mean time?
  • How do we make the CM tool clearly show us when the MAIN branch switched releases and which branches are used for other releases at which points?
  • How do my rules for views or context have to change when we switch MAIN releases?

You can probably add to this list.  In a branch-per-release strategy, each release has its own branch.  You always do release 2 specific work in the release 2 branch, and release 3 specific work in the release 3 branch.  Now if your tools support the branch-per-release concept, they will help you.  For example, when you make a release 2 change, they will let you know that it may have to be applied to release 3 or later streams as well.  They will give you the option of automatically inheriting changes from an earlier release into later releases, letting you establish this as a default policy that you override when you don't want it, or having you choose on a case-by-case basis.  If your tool has a release stream context for development, it can automatically tell you when you need to branch for reasons of parallel release development, rather than you having to instruct your staff on how to determine when to branch.
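To make the idea concrete, here is a minimal sketch in Python of how a release-stream model can flag that a change made in an earlier stream may also apply to later streams.  All names here are hypothetical illustrations, not any particular vendor's API.

```python
# Minimal sketch of branch-per-release with forward "inheritance" of changes.
# Stream names, change IDs, and the record format are all made up.

RELEASES = ["rel2", "rel3", "rel4"]   # ordered release streams

changes = []                          # recorded change packages

def record_change(change_id, release, files):
    """Record a change against its release stream and report which
    later streams it may also need to be applied to."""
    if release not in RELEASES:
        raise ValueError(f"unknown release stream: {release}")
    later = RELEASES[RELEASES.index(release) + 1:]
    changes.append({"id": change_id, "release": release, "files": files})
    if later:
        print(f"{change_id}: made in {release}; may also apply to {', '.join(later)}")
    return later

# A fix made in the release 2 stream is automatically flagged for rel3 and rel4.
record_change("fix-1042", "rel2", ["menu.c", "menu.h"])
```

The point is that the stream context, not the developer, carries the knowledge of which releases come later.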

The second thing that I mentioned was the overloading of the branching concept.  This happened because the CM technology did not evolve to support the evolving process.  Older technology may require that you branch code in order to support promotion levels.  It may require that you branch code so that you can label it with the feature, the change, the developer, etc.  It may require that you create branches and/or add labels to identify baselines and builds.  It may require that you branch code to support short-term (i.e., change-duration) parallel changes.  On top of all of this, a lot of labeling and merging results - and unfortunately, that's where the CM technology tends to evolve: to make these tasks easier and less resource intensive, rather than eliminating most of them in the first place.

Now consider this:  if you were looking at new tools, would you look first at how the tools simplify the entire branching architecture, or would it be more important to you that the tools provide advanced multi-way merging, have multiple ways of viewing branching, or have enhanced labeling mechanisms and a means to categorize and navigate labels?

If that's not enough to consider, assume that your product has been bought out by a new company; or that your development contract does not get renewed and the source code is being returned to the owner; or consider that one site is being shut down and all of the development code is being consolidated at a single site (without the staff from the other site).  How long will it take the owner to unravel all of the branches and labels that exist in the development environment, even if the CM tool kept perfect track of them all?  The answer here should be, at most, a few hours, not a few weeks or months or even years.  If you think that's impossible, all it means is that it's time for you to review CM technology again.

Branching strategy is key, it's true, but the minimal strategy that meets the requirements must take precedence over a minimal-CM-tool-functionality mentality.  CM tools need to deal with promotion without forcing every file to branch.  The same goes for change packaging, short-term parallel changes, change-owner identification, and baseline and build identification.  Design teams need to use the tools effectively, handling variants as a software engineering capability first and, only when that's in place, using CM to support it.  Undisciplined branching will eventually contribute to the downfall of a project.
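As a rough illustration of the difference, here is a small sketch in which promotion level, ownership, and baseline membership are attributes of a change record, so none of them require a branch or a per-file label.  The data model is hypothetical, not a specific tool's schema.

```python
# Sketch: promotion, ownership, and baselines as metadata on change records,
# rather than as per-file branches and labels.  All names are hypothetical.

from dataclasses import dataclass

PROMOTION_LEVELS = ["open", "submitted", "integration", "system_test", "production"]

@dataclass
class Change:
    change_id: str
    owner: str
    files: list                 # file revisions touched by this change
    level: str = "open"         # promotion level lives on the change

    def promote(self):
        i = PROMOTION_LEVELS.index(self.level)
        if i + 1 < len(PROMOTION_LEVELS):
            self.level = PROMOTION_LEVELS[i + 1]

def baseline(changes, up_to_level):
    """A baseline is simply the set of changes at or above a promotion level."""
    cutoff = PROMOTION_LEVELS.index(up_to_level)
    return [c.change_id for c in changes
            if PROMOTION_LEVELS.index(c.level) >= cutoff]

c1 = Change("chg-101", "alice", ["ui/menu.c"], level="integration")
c2 = Change("chg-102", "bob", ["db/schema.sql"], level="submitted")
print(baseline([c1, c2], "integration"))   # -> ['chg-101']
```

Promoting a change here touches one record; no file is branched, labeled, or merged to move it up a level.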

A Monolithic Monster
I remember the days of the Portable Common Tool Environment, "backplane" technology, and other efforts to get all of the best-of-breed management tools to work together in a cooperative manner: requirements, test management, change management, version control, document management, build and release management.  Contrast that with the companies that are trying to build all of those things into one giant monolithic monster.  I would rather imagine having the best of breed, all working together.  Yet twenty-something years later, I find that we're still working at such a solution.

If I'm using tools A, B, and C, and you give me a way of helping them work together so that I have better overall management and traceability, that's great.  In fact, that's fantastic.  I don't have to create the glue myself and, even better, I don't have to maintain it.  In these days of open source, especially, someone else is doing that for me.

Mistake #3:  We Rely on Common Backplane Schemes, Which Reduce Effort and Costs but Don't Come Close to Well-Designed Monolithic Tools
I've been doing software development since the late '60s, and there's one thing I've noticed: it's generally better, less costly, and faster to add on your own mousetrap than to try to integrate with an existing one.  There are exceptions, of course.  I wouldn't want to build an OS into my CM tool and then try to sell the CM tool with the OS.  But a CM tool is layered on top of an OS, while a problem-tracking tool is not; nor is a requirements tool or a release management tool.  I wouldn't say that a problem-tracking company could relatively easily build in a CM component; the reverse, though, is a different story.

CM is the heart of the ALM cycle.  It deals with configurations and changes to the configurations.  That's a complex process.  It requires data repositories, process engines, advanced user interfaces, etc.  If you use these "engines" as a common platform for all of the N ALM functions, you simplify administration by a factor of N.  You don't have to figure out how N multiple-site solutions will work together or build message structures and concepts for interchanging data among N different tools.  You also don't have to build glue for roughly N*(N-1)/2 pairwise tool integrations, or adjust N-1 pieces of glue every time one of the N pieces is upgraded.  In fact, you probably won't even need training on N tools and can probably customize all N tools in pretty much the same way.
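To put rough numbers on that scaling argument (my own illustration, not figures from any particular tool suite), the glue grows quadratically with the number of tools while a common platform does not:

```python
# Rough illustration of how point-to-point integration glue scales with the
# number N of ALM tools, versus a single common platform.

def pairwise_integrations(n):
    """Number of distinct tool-to-tool integrations among n tools."""
    return n * (n - 1) // 2

for n in (3, 5, 8):
    print(f"{n} tools: {pairwise_integrations(n)} integrations to build, "
          f"{n - 1} to re-check whenever one tool is upgraded")
# 3 tools: 3 integrations ... 8 tools: 28 integrations ...
```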

With a monolithic system, if you're building an ALM tool, you can put most of your energy into the common engines, which will benefit all functions.  With the resulting resource savings, you can spend more time on individual functions to tailor them more specifically to the requirements of the day.  This is precisely what two Canadian companies (MKS and Neuma) have decided is the best approach.  Look also at IBM, a company that is recognizing this and making a valiant effort to take old technology and merge it into one of these monolithic "monsters".  Bravo!  The payback is far more than you can imagine if you have not yet been exposed to such systems.

In a monolithic system, you have all of the data at hand.  It's in a single repository, so traceability is easy and fast, and it is easily supported by a common process engine.  Hopefully, you have a common user interface that is customized by role rather than by CM function, making it more user friendly and easier to navigate.  That in turn allows you to focus on higher-level roles: CRBs, product managers, VPs, directors, etc.  It also allows you to extend your ALM tools to support live meetings, since all of the data you need for decision making is at hand, hopefully in a well-designed format that gives you both management summaries and the ability to drill down to details and across traceability links.
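Here is a toy illustration of why traceability becomes a simple lookup when everything lives in one repository.  The record types, fields, and IDs are invented for the example, not any vendor's schema.

```python
# Toy model: requirements, changes, and problem reports as records in one
# repository, linked by IDs.  Traceability is then just a walk over the links.

repo = {
    "req-7":   {"type": "requirement", "title": "Highlight active menu button"},
    "pr-315":  {"type": "problem",     "title": "Menu button not highlighted",
                "fixed_by": "chg-101"},
    "chg-101": {"type": "change",      "implements": "req-7",
                "revisions": ["ui/menu.c@5", "ui/menu.h@3"]},
}

def trace_problem(problem_id):
    """From a problem report, find the fixing change, the requirement it
    implements, and the file revisions involved."""
    change = repo[repo[problem_id]["fixed_by"]]
    requirement = repo[change["implements"]]
    return requirement["title"], change["revisions"]

print(trace_problem("pr-315"))
# ('Highlight active menu button', ['ui/menu.c@5', 'ui/menu.h@3'])
```

With N separate tools, each hop in that walk would instead cross a tool boundary and a piece of glue.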

I've been exposed to a number of monolithic systems, even some bad ones.  Though I would stay very far away from the bad ones, I would never start a project with a glued-together solution or even with just a small portion of the functions in one tool.  I'd always start with the "monster".

We Have The Expertise
Nortel selected a CM tool for corporate use because it was the "best."  Really, it was because they had expertise on board that knew the product.  They had a few successful products that were using the tool, each with its own CM customization team.  They knew it needed a lot of customization, consulting, administration, etc.  But they also argued that they had the best minds, were up for the task, and that money wasn't the issue.  They took on the tool as a challenge, one that could beat out their own home-grown technology.  They probably put somewhere between $100 million and a quarter billion dollars into customizing, administering, and operating their CM system.  Today, Nortel is under bankruptcy protection.

Mistake #4:  We Hold Onto a System Because of our Familiarity With it, Regardless of the Cost
There are many places I've been where the choice of CM technology was based on the experience of the CM manager making the decision.  One tends to go with what one is familiar with.  One tends not to look so much at the cost.  And why?  Well, if the solution requires lots of training for administration, customization, and just plain usage, and that training takes lots of time and money, and I've already been through the training, why shouldn't I hasten the time to deployment and cut training costs by going with something I already know?  Well, why not?  The reason may be that there is technology out there that doesn't require nearly the same amount of training and has better functionality and lower operating costs as well.  There are technical people out there who still like to use only command-line interfaces because they know them well and can do anything with them, be it a CM tool or an OS (e.g., Unix).  I don't have any problem with that, but I do when they then try to impose it on the entire team.  I know of companies that are hurting but are burdened with large CM operating costs, and I suspect that there will be more than one such company where those costs help prevent it from surviving the current economic crisis.  When you look for technology, don't start from where you were 5 or 10 years ago; look at the market anew.

A similar situation arises in building your solution in-house.  Nortel did this in the mid-'70s and had perhaps the best CM system of its time, running on an IBM mainframe.  AT&T had similar success and even took their system commercial at one point.  IBM did as well.  These were great systems, and while Nortel, AT&T, and IBM are able to finance such ventures, CM tools are not their core competence.  These companies built their own in their day because there were no other solutions that compared favorably.  They held on to them for long periods and built large teams around them, even though they started out as very small team efforts and stayed small through initial maturity.  As the teams grew in size, the business case for them became less compelling.  This is why Nortel started moving away, about a decade ago, from its in-house CM, though I'm sure much of it is still in use.  They had the expertise, but they didn't manage the overall costs well.  It cost less than $1 million to create a CM tool that saved them millions - then they grew the team so that the tool was too expensive to evolve.  Rather than try to spin the group out into a separate company, or reduce it back down to its original size of a dozen people or fewer, they abandoned their own in-house tool and moved on to new technology.  A good move, perhaps, except guess what?  Mistake #1 is followed by mistake #4.

Change Control, Not File Control
CM tools evolved in the late '60s and early '70s, with the advent of SCCS and other lesser-known tools.  These would allow you to store multiple versions of a file in a single file, delta compressed.  This capability evolved and evolved but, with some exceptions, remained file based.  A few organizations recognized the need for change-based CM: tracking changes not file by file, but as changes implemented by revisions to a set of files.  Even Nortel's home-grown system realized this (hence its longevity).  Commercial vendors, though, with the exception of a couple, pushed the file-based agenda.

Mistake #5:  We Stick With File-Based CM Rather Than Change-Based CM
It wasn't until the late '90s that most vendors conceded that change packaging was essential and that file-based CM just got you into trouble.  So the scramble started to improve tools, and a few did an admirable job.  Most looked at adding a separate concept called a change package, task, or similar so that files could be tied together into a change package, but this required the use of additional tools or user interfaces.  As we turned the century, some realized that there were actually some benefits to looking at things from the change perspective rather than from a file perspective.

Very few CM tools have grown up with, or have moved to, a change-centric focus.  As a result, most developers still look at file-based tools and what improvements can be made from a file-based perspective.  A change-centric CM tool makes life so much easier.  It makes CM natural, not an extra process that developers must put up with.  In fact, it greatly reduces their effort.  Imagine merging a change from release 3 to release 2 by selecting the change and saying, "merge please."  This is so much simpler than identifying all of the files that changed to implement a feature, looking at the branching history of each, figuring out which files need to merge, and then merging each one.  Or imagine looking through a set of 20 changes for the smoking gun that broke the build, as opposed to finding and looking through the 73 files that were involved in those changes.
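To show what "select the change and say merge" might look like in data terms, here is a small sketch of a change package that carries its own file revisions, so a cross-release merge iterates over one change instead of a hand-built file list.  The structures and the merge call are hypothetical, not a real tool's API.

```python
# Sketch: a change package knows its file revisions, so merging the change to
# another release stream means applying each of its deltas.  Names are made up.

change_packages = {
    "chg-345": {
        "summary": "Fix menu highlight on focus change",
        "release": "rel3",
        "revisions": {"ui/menu.c": "delta-17", "ui/theme.c": "delta-4"},
    },
}

def merge_change(change_id, target_release, apply_delta):
    """Merge one change package into another release stream by applying
    each of its file deltas; no per-file archaeology required."""
    pkg = change_packages[change_id]
    for path, delta in pkg["revisions"].items():
        apply_delta(path, delta, target_release)
    print(f"merged {change_id} ({pkg['summary']}) from "
          f"{pkg['release']} into {target_release}")

# A stand-in for the real delta-application step.
merge_change("chg-345", "rel2",
             lambda path, delta, rel: print(f"  applied {delta} to {path} in {rel}"))
```

The same grouping is what makes the "smoking gun" hunt a search over 20 changes instead of 73 files.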

CM's rather slow evolution, at least compared to other technologies (other than RDBMS systems), is largely due to the file-centric view that still dominates a number of tools.  Perhaps some can be forgiven if they evolved from a hardware-support background.  Hardware CM is primarily a part-based change tracking system to this day.  That's because hardware has a greater affinity to parts.  My mouse stops working: what part failed?  I've got no power: is it the power wiring or the power supply?  That's not to say that hardware can't benefit from change packages; it's just that it can survive better without them.  This is especially true because of the more rigid ECR/ECO process built into most hardware development shops.

Software is functional.  The menu button is not highlighted.  Maybe it's the specification, or are there certain conditions somewhere that have to be met?  It could be a bug in the menu software, or did someone overwrite some memory?  Maybe I'm out of resources?  Software functions span the architecture.  A bug doesn't point to a file the way a flat tire points to a wheel.  If it used to work, it may point to a recent change.  As well, software doesn't wear out like hardware and isn't as environmentally particular; the failure is not fatigue or operating conditions.  Software changes usually involve more than one file, possibly in totally disjoint parts of the system (e.g., the user interface and the database).  Fixes can usually be done in multiple ways: we can check for the symptom and fail, we can find the root cause and fix it there, we can disable that functionality, or, if the problem is in stable code, we can code around it, and so on.

Change-based CM has been embraced too late by the industry.  I hope file-based CM won't be tolerated much longer.

What's a Few Mistakes?
So these are some of the industry's mistakes, whether made by tool vendors, process engineers, or CM teams.  What's the impact?  Has this really slowed us down?  Well, yes.  In fact, YES.  Whereas operating systems have moved from very complex, tunable systems that software engineers had to customize and babysit to plug-and-play software that anyone can substantially customize, CM is largely still a technical mess - and in many cases we continue to build capabilities to manage the mess.

CM is complex, but so are operating systems.  Although more expansive, an OS is still not as complex as CM.  Why?  Because one deals with making well-behaved processes conform to the software rules for the benefit of one or a few individuals, while the other deals with trying to get a team to work together.  Over the years I have contemplated, with more than one CM system, evolving the technology into an OS platform.  Only one OS (VMS) have I ever considered evolving into a CM system.  I've done neither, but I keep my options open on the former.

CM is complex and difficult, but the user roles must not be.  Even though hundreds of users are working with tens of thousands of files, hundreds or thousands of requirements, thousands upon thousands of problem reports/defects, myriads of test cases, in multiple development/support streams of dozens of products having several variants, it must not be complex and difficult for the user.

It's time to throw away the knives and stones and start using real technology.  It's time to capture the data from the thousands of user actions each day and figure out what's going on, and, even further, to provide active feedback so that, rather than users asking the CM tool questions, the CM tool provides the answers before they're asked: You need to branch.  Your workspace file is different from the one in your view.  This change may have to be applied to other "future" streams.  Here's all the information about why this line of code was changed.  Your schedule is at risk because of x, y, and z.  Here's what your customer is using now, and here are the functional and fix differences between that product and what you're releasing tomorrow.

Impressive.  Pie in the sky, perhaps?  Perhaps not.  In fact, let's go one step further.  We have accountants who keep variants of their spreadsheets as they work through their reconciliations.  We've got lawyers who evolve their agreements as they work through negotiations.  We've got sons and daughters who work through essays and theses.  We've got physicists who repeat experiments, each subtly different from the previous run, or many the same with different results.  These people have not yet even been exposed to CM, yet they are just as much in need of it as developers.  Can we actually make CM tools that are easy enough for such a class of users?  You betcha.  We can.  We don't.  We'd rather evolve our band-aids to deal with our knives and stones than move on.

Surely I've only touched on a few of our mistakes.  And surely new vendors will sprout up with a great new concept that drives sales for their CM tools.  But we've got to move away from the days of "I like this CPU instruction set" and "SRAM has a lot of advantages over DRAM" to the days of "Here's a computer that your grandma can use."  We haven't done this with CM, and until the industry starts looking seriously at the potential, it will be content with replacing the knife with a Swiss Army knife, and the stones with rubies and gems.  Until we start to admit to our mistakes and strive to reach the next generation of CM, we'll stagnate.

Ask yourself: are the processes and tools I'm using today for CM substantially different from the ones I was using 5 years ago, 10 years ago, 15 years ago?  Or are they just packaged prettier, with add-ons to make some of the admin-intensive tasks more tolerable?  I know of two or three companies who can (and many that will) say their tools are significantly better than they were three years ago.  I know of many, many more who have just spruced up the package, or perhaps customized it for another sector.  Nothing wrong with that, but that won't get us ahead.  Let's hope the two or three can stick with it and bring about the change needed to drag the rest of the industry forward.  That's exactly what I think is going to happen... and then the pace will quicken.

User Comments

Bikefar

Regarding CM mistakes - I have two.

1. Don't think that if you have a CM tool, you are doing CM. A CM tool helps you do CM.

2. Configuration Management does not mean a manager has to manage it. CM is where the business side and the techie side meet.

Suggestion for a topic: what do configuration managers do when they stop doing it?

February 22, 2019 - 6:03am
Joe Farah

Great comments Peter... I've seen a lot of companies with CM tools and no process!  Also, I agree with your 2nd mistake... it should not be the CM manager who is doing CM.  But a lot of processes and tools necessitate someone to fill in the gaps.  I've also seen organizations using, for example, CM+ with no CM manager.  Once the process is in place, and given that the tool provides good process coverage, and is reliable, there really is no need for the CM manager to do CM.  Instead (s)he can focus on evolving process and reaching out more into the business side of the organization.

February 22, 2019 - 8:54am
