The Past, Present, and Future of CM Tools: An Interview with Joe Farah

We're sitting down with CM experts to discuss not just their backgrounds in the field, but what their favorite tools are and why. This week, Joe Farah at Neuma Technology provides a great deal of information on what tools are leading the way today, and what we can expect from them in the future.

Noel: What's the primary CM tool you're working with, and how long have you been working with it?

Joe: I have been working primarily with Neuma's CM+ for the past twenty-four years. CM+ has met all of our needs in version control, build and release management, change control, requirements management, document management, test suite management, change request management, problem tracking, and project management.

Noel: What tools do you have past experience with, and what made you switch to your current favorite?

Joe: In the past I have worked with SCCS (Unix), CMS (DEC/VMS), PLS (Nortel/BNR proprietary), and SMS (Mitel proprietary). I am familiar, to a lesser extent, with various other tools, including ClearCase (IBM) and Subversion (open source), but have not used these on any projects. In my case, CM+ is my preferred tool because we have been able to rapidly expand its capabilities, both as users and as the vendor.

Noel: What allows a tool to be scalable or adaptable to other organizations/companies?

Joe: Scalability and adaptability are two separate issues. For scalability, there are two key concerns: performance and resource requirements. Efficient SCM/ALM tools such as Perforce and CM+ can support hundreds to thousands of users with little regard to performance tuning or to the level of hardware hosting the CM repository/server. Other tools, such as ClearCase (IBM) or Dimensions (Serena), are what I call "Big-IT" solutions: they require that you manage your hardware if you wish to scale up to hundreds of users, and they typically require performance tuning both in the hosting repository and in the SCM/ALM software. With CM+, for example, you can scale from 1 to 2,000 users without worrying about performance issues or the level of hardware you're using. Hardware improvements have reduced scalability issues over the years, but when you end up managing millions of revisions of things, it still makes a big difference. "Tagging" operations can be onerous on some systems but lightning fast on others. This comes down to the architecture of the solution and of its underlying repository capabilities.
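The tagging-cost difference Joe describes can be sketched abstractly. This is my own illustrative sketch, not any particular tool's implementation: in a "label-per-file" architecture, tagging a baseline writes one record per file, so cost grows with repository size; in a "snapshot-pointer" architecture, a tag is a single small record pointing at an immutable baseline, regardless of how many files it covers.

```python
# Hypothetical comparison of two tagging architectures (names are mine,
# not drawn from any specific CM tool).

def tag_per_file(repo_files, tag):
    """Label-per-file style: write a label record for every file revision."""
    labels = {}
    for path, revision in repo_files.items():   # O(number of files)
        labels[(path, tag)] = revision
    return labels

def tag_snapshot(snapshot_id, tag, tags=None):
    """Snapshot-pointer style: record one pointer to a frozen baseline."""
    tags = {} if tags is None else tags
    tags[tag] = snapshot_id                     # O(1), independent of size
    return tags

repo = {f"src/file{i}.c": 1 for i in range(1000)}
labels = tag_per_file(repo, "REL_1.0")          # one write per file
tags = tag_snapshot("baseline-42", "REL_1.0")   # one write total
print(len(labels), len(tags))                   # 1000 vs 1
```

The point is architectural: no amount of hardware makes a per-file tag cheaper than a single-record tag once the file count reaches the millions.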

Most CM solutions rely on relational databases, because that's the norm in the database world. But relational databases are not well suited to things like large objects (such as files), revisions of data, hierarchical data such as you find in org charts, source trees, bills of materials (BOMs), and work breakdown structures (WBSs), or traceability links, especially when those are not one-to-one relationships.
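As a small illustration of the hierarchy problem (my own sketch, with hypothetical table and column names): in a relational store, even listing one subtree of a source tree takes a recursive query, whereas a hierarchy-aware repository answers this kind of question natively.

```python
# Sketch: a source tree flattened into a parent-pointer table, queried
# with a recursive CTE in SQLite. Schema and data are illustrative only.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE node (id INTEGER PRIMARY KEY, parent_id INTEGER, name TEXT);
INSERT INTO node VALUES
  (1, NULL, 'src'),
  (2, 1,    'cm'),
  (3, 2,    'branch.c'),
  (4, 1,    'build'),
  (5, 4,    'Makefile');
""")

# Walking the tree requires recursion in SQL -- there is no plain
# SELECT that returns a subtree of unknown depth.
rows = conn.execute("""
WITH RECURSIVE subtree(id, name, depth) AS (
    SELECT id, name, 0 FROM node WHERE parent_id IS NULL
    UNION ALL
    SELECT n.id, n.name, s.depth + 1
    FROM node n JOIN subtree s ON n.parent_id = s.id
)
SELECT name, depth FROM subtree ORDER BY id
""").fetchall()

for name, depth in rows:
    print("  " * depth + name)
```

Multiply this by revisions of every node and many-to-many traceability links, and the mismatch Joe describes between relational storage and CM data becomes clear.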

This brings us to the second part of scalability: resource requirements. Obviously, if you can set and forget your repository, you'll need fewer staff to maintain it, and maintenance is a huge cost in some CM solutions. Many CM+ shops allocate zero persons to maintenance, as we do here at Neuma. Scalability doesn't just mean that I can move from 10 to 100 to 1,000 users, or from 10,000 to 100,000 to 1,000,000 files. It means that when I do, I don't have to move from one-person support to 5 to 50 as I "scale" my solution. Unfortunately, that is the case with some second-generation solutions. It is also important that productivity and ease of use scale well with a growing user base and a growing code base. It's just not acceptable to have a slower system as more systems and files come on board.

The second area of the question was with respect to adaptation of the solution. Here we have a wider scattering of capabilities among tools. First of all, recognize that some tools

