The Practice of Good Release Management Processes in CM


In his CM: the Next Generation series, Joe Farah gives us a glimpse into what CM experts will need to tackle and master, based on industry trends and future technology challenges.

Summary:
We build software as part of a system or as its own entire product. The goal is to meet the requirements established by the customer, the market, and/or the cost/benefit analysis. Product releases are meant to move us from some starting point to our ultimate product over a period of time: months, years, or even decades. Release management starts not with the delivery of software, but with the identification of what we're planning to put into the product. Managing the timing and content of releases ensures that they are not too onerous on the customer and that our products stay competitive. Good release management processes will ensure that you know what is going to go into your product, what actually went into it, and what changes the customer will realize upon upgrading.

Planning and Tracking the Release

Release management begins with the identification of what is being developed for a release. You may have an Agile development team or a more traditional model. Either way, you need to plan what's going into your releases. The individual features or problem fixes must be identified and tracked.  They must be linked to the requirements and requests that were responsible for them, and then referenced from the changes that implement them.

With Agile development, planning takes the form of identifying features and problems and prioritizing them. Every week or two, revisit your to-do lists and adjust priorities. Agile development keeps your team focused on the next couple of weeks; release management deals with what will be released 3 to 12 months from now and in the releases that follow.

For successful release management, your efforts must be carefully traced to the features that are completed. Last-minute specification changes, typical in a fast-feedback agile iteration, must be adequately captured along the way. In the end, you will have a product, but you need to know what is in the product and what is not. If your ALM tools don't allow you to accurately track this along the way, your agile gains will be lost to additional delays and inaccuracies in your release process. If the ALM tools are intrusive, they'll interfere with your lean operation.

In a more traditional schedule-based development environment, release planning is done as part of the requirements specification effort. Requirements are identified and ranked by priority/weight. As the customer requirements are turned into a functional specification, an initial feature-by-feature effort estimate allows you to plan your time frames. Here again, as the plan is executed, it's critical that the actual development changes reference the features and problems being addressed. Your CM/ALM tools, along with your peer review process, can ensure that this happens.

One Release per Customer?
Early in a product's lifetime, there is a tendency to customize each release to a specific customer's requirements. This is a good thing, but if it's not handled properly, as I've seen time and time again, you end up managing multiple releases, one per customer, instead of managing your product.

It's important to recognize that the development team and the product design are crucial release management factors.  Customization of releases is always going to give a competitive edge. However, the goal is to customize at the customer's site, not in the development shop.  If your developers are creating custom builds for your customers, you will rapidly discover that your resource requirements grow linearly with the number of customers you have.  Additionally, product complexity grows exponentially.

The development team will invariably have some experience on board. Explain to them the requirement that customizations will need to be done post-delivery, at the customer's site. There is actually an entire range of customization capabilities that will need to be delivered. Some will need to be done prior to delivery; for example, platform-specific builds must be created before delivery. The development team needs to be in the habit of identifying which customizations can be held off as late as possible, and of designing the software that way. Consider whether the customization is:

  • A coding time customization
  • A build-time customization
  • An installation-time customization
  • An initialization-time customization
  • A run-time customization

Your team must have the goal of moving customizations as far down this chain as possible. Each level you move down the chain saves significantly on build effort, release administration, and complexity. Eager programmers might say that they have a nifty way of managing conditional compilations to put the right combination of features in place. An astute product manager will insist, instead, that feature selection be done at run-time, based on the license keys currently installed at the customer site or on the contents of a feature specification data file.

Product design is crucial here.  From the outset, you must have a way of enabling and disabling features at run-time if you have any sort of user interface to your software.  Design a mechanism that will allow you to select feature configurations.  This is not difficult to do.  It can be a command line capability, a check-box capability, or perhaps a table in your product's database (for which there is, presumably, a user interface).  Once it's there, your development team simply needs to be told:  check the feature checklist and disable this feature if it's not active.  From a programmer's perspective, this is a simple task, and it helps designers to focus on a more structured architecture.  The key is designing this capability into your product up front.
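As a minimal sketch of such a mechanism (the class, function, and feature names here are hypothetical, not taken from any particular product), a feature table populated once at initialization, from license keys or a feature specification file, gives every designer a single place to check:

    // FeatureTable.h - a minimal run-time feature selection sketch (C++).
    // Assumes the table is populated at startup from license keys or a
    // feature specification data file; all names are illustrative.
    #include <string>
    #include <unordered_map>

    class FeatureTable {
    public:
        // Called at initialization time for each licensed/configured feature.
        void set(const std::string& name, bool active) { features_[name] = active; }

        // Unknown features default to disabled, which keeps subsets safe.
        bool isActive(const std::string& name) const {
            auto it = features_.find(name);
            return it != features_.end() && it->second;
        }

    private:
        std::unordered_map<std::string, bool> features_;
    };

With this in place, the designer's task really is simple: if isActive("advanced-reporting") returns false, hide or disable that menu item and move on.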

Manage the Superset, Deliver Subsets
I've gone into companies that had 18 different builds because they had 3 variants of 2 different sizes with 3 optional components. Because there was no design initiative, it took a different build for each combination. It was easy for the designers to push the requirements downstream, but they began complaining that builds weren't being turned around fast enough. Moving this organization to a single-build solution took less than a month.

A colleague of mine more recently told me of another project with 200 customers, virtually every one having its own release definition. They're scrambling for help as their sales continue to climb. Don't start out on the wrong foot, because you'll tend to delay the move to the right foot until it seriously impacts your organization.

The key, once your feature selection architecture is in place, is to manage the product superset, and build and deliver subsets. If you have different platforms, make sure that your configuration management is done on the superset. This again requires support from design. Platform-specific items need to go into separate files that can be included in the appropriate builds based on the build request. Don't create a nightmare for yourself by forcing the same name on each of the objects.

Label them according to their platform:  SolarisDefs.h, LinuxDefs.h, WindowsDefs.h, etc.  Then dynamically create a single file that includes the appropriate file based on the build options.  In this way, the CM remains simple and you won't have different branches per release, per platform, per option, etc. The files are managed just like all the others.
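As a minimal sketch of one way to do this (the PLATFORM_* macros are assumptions here, defined by the build script from the build request), the single include file can simply dispatch to the right platform file:

    /* PlatformDefs.h - the single file the rest of the code includes.
       The build request defines exactly one PLATFORM_* macro. */
    #if defined(PLATFORM_SOLARIS)
    #include "SolarisDefs.h"
    #elif defined(PLATFORM_LINUX)
    #include "LinuxDefs.h"
    #elif defined(PLATFORM_WINDOWS)
    #include "WindowsDefs.h"
    #else
    #error "No platform selected in the build request"
    #endif

The rest of the code base includes only PlatformDefs.h, so the superset stays under CM in one place and the subset is chosen at build time.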

Perhaps you have optional features that are to be packaged, such as language tables or documentation. First ask the question: why can't we deliver all of them all of the time? If the answer really is that you can't, even with run-time feature checking (e.g., for security reasons), package the optional features into separate files (again, named for their options, not as variants of the same file) and tag those files using your CM system. From a CM perspective, you'll manage all of them as components of your product. From a build perspective, you can specify the tags you need to select the appropriate subsets.

This is really not rocket science, but I continue to be amazed at how much administration an organization is willing to accept, as compared to putting 1% of that cost into doing it the right way. That administration effort extends from development, through CM, into release management, and ultimately into customer management.

Are We Ready To Release?
Now that you're doing everything right, how do you know if you're ready to release the product? You need to track the builds that are sent to verification, track the problems that result from verification, and validate the set of tests being run by verification.

Test cases need to be linked to the features/requirements they are addressing.  Your ALM tools should be able to tell you in a single click what requirements don't have test case coverage or what requirements are covered by a set of test cases.  You should be able to select a particular build, ask what verification sessions have been run against it and ask for the results:  what/how many problems were raised? How many test cases failed?  What percentage of test cases were run?   Your ALM tool should be able to provide you with this picture for each of the verification builds, so that you can see the progress from build to build.  If you compare this progress curve from one release to another, you'll notice similarities, with the greatest variance due to changes in your process and methods.  You'll be able to predict when this release will reach the quality required, based on this curve.
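The query behind that single click is simple; the hard part is capturing the traceability links as the work is done. Here is a sketch of the idea (the record types and function are hypothetical; a real ALM tool answers this from its own database):

    // Coverage query sketch (C++): which requirements have no test case?
    #include <set>
    #include <string>
    #include <vector>

    struct TestCase {
        std::string id;
        std::vector<std::string> requirementIds;  // traceability links
    };

    // Returns the requirements that no test case references.
    std::set<std::string> uncoveredRequirements(
            const std::set<std::string>& requirements,
            const std::vector<TestCase>& testCases) {
        std::set<std::string> uncovered = requirements;
        for (const auto& tc : testCases)
            for (const auto& req : tc.requirementIds)
                uncovered.erase(req);
        return uncovered;
    }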

Generic results such as these are helpful, but there are two more things you need to do before being able to release your product, and your ALM tools must be front and center here once again. First, you need your CRB (change review board) to analyze incoming problem reports and identify which problems must be fixed prior to release.

One approach is to try to fix them all.  And that's OK, as long as you realize that fixing them all is going to introduce additional problems that may take you a few months to uncover.  The "fix them all" approach is best done at the beginning of a release cycle so that the side effects can be discovered before you release.  Closer to release date, you need to be very specific about which problems you fix.  I've seen some very, very trivial looking issues cause great problems after being fixed incorrectly - even when the fix was reviewed and appeared simple.   

I recommend you get into the habit of planning for a service pack release following your initial release, and placing all non-critical problems into that service pack, rather than trying to address everything prior to release. (We're talking software here; the same does not apply to hardware, or at least the weights and balances are different.) Often, the list of must-fix problems is referred to as the gating problems (i.e., they gate the release).

The second thing you need to do is to get the product into your customers' hands. Plan alpha and beta releases. Give away the software if you have to, but make sure plain, ordinary, everyday users are going to exercise the product. You will never be able to test all scenarios. If you think so, consider why NASA, with all of its tight development, review, and verification processes, still hits problems in flight or on the launch pad. It's not because of process problems. It's because they have a finite window and budget to complete a task, just like every development project, and because the test environment differs from the specific user environments. Getting the product into users' hands is the real way to evaluate the readiness of a release. It's a key part of release management. Track the issues found specifically against field trials. You'll likely find that users rarely hit the problems your test cases are there to catch; it's usually some more obscure case that never made it as a test scenario, at least not in the same run-time environment. Develop the same progress curve for field trials as you did for verification. Build confidence that you're ready to release.

What Are We Delivering: Traceability
Now that you're ready to release, you need to tell the industry what features you're releasing with this release. If your ALM tools are adequate, this should be a push-button task, at least down to the level of detail. The formatting may require a technical writer or graphics designer, but an accurate list of what has gone into the release comes from knowing what build you're releasing, how that build was built, and from what source, by tracing your build back to the changes that were made. It also helps to have the release peer reviewed to ensure that the changes covered exactly what they said they would; this is the substance of a physical audit. Tracing the changes back to their specifications, and verifying that your product conforms to those specs, constitutes a functional audit; for software, this is basically done as a verification run record.

Some ALM tools span several databases, and building the complete picture may be non-trivial and even error-prone. That's not good. Often this results from gluing things together yourself, without the experience that shows you where the glue doesn't really hold up. This is where you really find out if your processes are bulletproof and simple enough to completely and correctly capture the data. If not, you'll find out from one or more of your customers.

Your tools need to support you in packaging your release, whether it's done over the Internet, directly to your in-house customer base, or placed on a DVD.  Your tools need to ensure that what you intend to deliver gets properly packaged and delivered.

What Does the Customer Have?
It's one thing to tell the customer what's in a release.  It's another thing to tell the customer what changes they will see.  Perhaps they're using an older release or a specific service pack level. Maybe you've done custom builds for the customer to get them the features early or to fix a pile of urgent problems.

Every build that goes out the door needs to be tracked in your ALM tool. Furthermore, you need to know which build (or builds) every customer is using.  It may be fine to have some basic rules, but then you have to at least track both the rules and the exceptions.  When you deliver to your customer you need to say:

  • This is what you currently have
  • This is what we're delivering to you
  • Here are the requests you've asked for, by problem and by feature request
  • Here are the requests we're satisfying
  • Here are the requests we're not satisfying and their current disposition
  • Here is how to upgrade from where you are to the new release (ideally, this is fully automated but that's not always realistic)
  • Here is the incremental training that will most benefit your users

By doing this, you'll gain customer confidence. This will give you reference customers, and that will increase your sales. Make sure you can go to your customer site and identify exactly what they have installed: Is it one release or several? Are there old releases or new? What variants do you see? Don't trust that what you sent them is what they're using. Your product should be able to report its exact contents, preferably in terms of one or more build identifiers that you inserted into the deliverables, which you can use to trace exactly which lines of code they have.
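One common way to support this, sketched below, is to compile a build identifier into each deliverable so that it can be printed on demand and scanned for in the installed binaries. The BUILD_ID macro is assumed to be injected by the build (e.g., by passing -DBUILD_ID='"R4.2-b1037"' on the compile line); the names are illustrative:

    // BuildId.cpp - embed a searchable build identifier (C++ sketch).
    #include <cstdio>

    #ifndef BUILD_ID
    #define BUILD_ID "unofficial-build"  // never ship without a real ID
    #endif

    // The fixed prefix makes the string easy to locate in a binary on site.
    static const char kBuildId[] = "Product-Build-Id: " BUILD_ID;

    void reportVersion() {
        std::printf("%s\n", kBuildId);  // e.g., behind a --version option
    }

On site, you can then confirm what the customer is actually running from the product's own version report, or by scanning the installed files for the identifier string, and trace it back through your CM tool to the exact sources.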

Don't fall into the trap of thinking release management comes at the end of development.  It starts before development does and it persists after delivery.  Think about it up front and you'll design your development processes appropriately.  You won't be scrambling to address the complexities of an ad-hoc release process. 
