Build Management Essentials You Need to Know


In his CM: the Next Generation series, Joe Farah gives us a glimpse into the trends that CM experts will need to tackle and master based upon industry trends and future technology challenges.

It used to be that reproducibility was the holy grail of the build process. While that is still the central requirement, good build management processes and tools can take you a lot further, a lot faster, and with better quality. The steps are the same: identify a build, select the updates (i.e., change packages) that are going into it, create the build definition in the CM repository, and then click a magic button that causes the build to be built. Done.

It doesn't really stop there. That build, or a subsequent one, has to make it out to production. That means there are going to be test cases run against it. Tech writers need to know exactly what is in the build so that they can document it. Product managers likewise need to know that the build has all of the required features and fixes, and that it is of sufficient quality. Developers just want to test their own changes against the new build (which they probably created by themselves, for themselves) so that they can correct them and repeat the process.

If you think deeply about a build, it's not just a set of executables/deliverables.  There's an entire history of how it got there and a whole story about what's in the build.

What Does “Build” Mean?
The term "build" can take on more than one meaning, all around the same concept.  Prior to a build taking place, there's the concept of build that means build notice.  Typically a build notice has a specific time in mind (possibly automated each day, hour, week, etc.) but may have either a rule-based (in the case of automation) or a manually specified content description, that is, what is going into the build.   The build Notice, prior to the build is often referred to as the "build".  In any next generation ALM tool, the current content definition should be a click or two away.

When a build has been completed, the record of what went into it, the build record, is also referred to as the build. Because the context makes it clear that we're talking about a build in the past, this sense of the word deals with ensuring we have a record of what actually happened. In a next generation ALM tool, the build record itself may change over time. This is, perhaps, a scary thought if you've not been exposed to it, so we'll cover it in more detail below.

The build operation, that is, the compiling, linking, packaging, etc., is yet another meaning carried by the word build. Often you'll hear, "That broke the build," meaning that the build operation failed and couldn't be completed.

The fourth use of the word build is to refer to the artifacts produced by the build. These are the executables, help files, or, more generally, deliverables, in whatever form the build leaves them. It's common to hear: "What build were you using?" To summarize, we have at least four common meanings for the term build:

1. Build notice: a build in the future
2. Build record: the record of what went into the build
3. Build operation: the procedure used to generate the build artifacts
4. Build artifacts: the deliverables produced by the build operation

With all of these meanings, you'd think that there would be a lot of confusion. In context, though, the meaning is almost always clear, partly because we're really talking about the same thing, just from different viewpoints.

Build Process
There are many, many builds performed in today's typical development environment. There are production builds, verification builds, integration builds, sanity builds, and probably the most frequent of all, developer builds. The adjective defines the purpose of the build. Developers use developer builds to verify and debug their changes. Production builds are used to create artifacts ready for production delivery. And so on.

Builds are often automated. For example, the developer clicks on a build icon or menu item to integrate and re-test his or her changes. Sanity builds are often done automatically each night to ensure the quality of the changes that have been checked in to the repository and are ready for integration. The term "nightly build" is used as a standing build notice.

So we have a hundred developers, each doing a dozen or two builds a day. Do we want to record each of these in the repository? I don't think so. These builds are part of the edit, build, test cycle, which typically advances in very small increments. Imagine if the developer's work were saved in the repository for each developer build. We'd have volumes of data that nobody really cares about. So formal build records typically are not kept for each developer build, though the full results of the latest build are likely part of the developer's workspace.

When we move to system builds, it becomes much more important to track every build, or at least nearly every one. Consider production builds. You wouldn't think of (I hope) creating a production build without a complete record of what went into it, a record that will persist for all time. Your process for sanity builds might be the same. Then again, you might just as well record only "successful" sanity builds.

In our shop, if a build doesn't complete properly, we address the issue with additional changes and add those changes to the build notice. We then throw out the unsuccessful build artifacts and restart with the new definition. So if it takes four or five tries to get a successful build out, rather than keeping all of the unsuccessful builds around, we allow the build record to be modified and the build repeated - the same identifier, but with modified contents. This is much the same thing the developer does with file changes: make some changes, test them. If they don't work, make some more changes and test them again. Finally, the source code is ready to check in. All the in-between changes are just noise that does not need to be recorded.

This can be good or bad. If you raise problem reports that can be used to track the sequence of getting a successful build, you still have an audit trail. If you are just trying different things to get the build to work, but have no real record of the sequence, then perhaps you're losing valuable information. For example, if it's normal for your successful builds to take multiple tries, there's probably a root cause that can be addressed. If all the repository shows is the successes, what justification will you have to give management for spending the resources to track down the root cause and change the process?

Build Versus Baseline
So this brings up a couple of questions. How do we know whether the build definition was successful? How do we know that the build definition is not going to change again?

First of all, we should point out that we do not create a "baseline" for each build - though I know many who do. A baseline is used for measuring change. That's where the word comes from. Yes, I can create a build based on a baseline definition. However, I will likely need to create multiple builds from that baseline definition (English, Spanish, and French versions, for example).

This comes back to our saying that one should manage the superset and build the subset. Or, put another way, manage the baseline, build the variants. A baseline should reflect all of the possible combinations of optional components, but we want our builds to be very specific, perhaps for a single customer. Perhaps we find three or four critical problems on our way to completing integration testing. Rather than create a new baseline each time, we define each build in terms of the existing baseline plus a number of change packages, which in our shop/tool we refer to as updates.

A baseline is a frozen configuration of all content revisions. It never changes. As such, it can be used as a reference point to measure change. A build, on the other hand, is a record of what was used to create a specific set of deliverables. It should include:

1. The build procedures/tools used
2. The baseline on which it was based
3. Options, subset specifications, and variants used to "subset" the baseline
4. Additional updates used to augment the baseline
5. State information

Usually, the build notice becomes the build record when the build operation has been completed.
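As an illustration, here is a minimal sketch of a build record as a data structure. The field names are hypothetical, not any particular tool's schema:

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class BuildRecord:
        """Hypothetical build record; fields mirror the five items listed above."""
        build_id: str                 # e.g., "rel5.2-build042"
        procedures: List[str]         # 1. build procedures/tools used
        baseline: str                 # 2. the baseline on which the build is based
        options: List[str]            # 3. options/subset specs/variants applied
        updates: List[str] = field(default_factory=list)  # 4. updates added to the baseline
        state: str = "original"       # 5. state information (see state flow below)

Note that the updates list can grow while the build notice is being redefined, which is exactly the "modifiable build record" behavior described earlier.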

So now I can create a variety of builds based on the baseline. Some may be customer specific. Some may contain only a subset of deliverables. Some may simply be used to increase build quality prior to promoting the build to the next functional group. But now I have a reference point, the baseline, and a manageable set of configurations of and additions to the baseline that can be used to define each build, rather than an enumeration of thousands of "source" revisions.

Build State Flow
Now let's go back to our questions at the top of the last section. A baseline is a frozen configuration. A build definition is potentially changing until we get a "successful" build. So how do we know when that is, and what it is? The answer is that we use a build state flow. The build isn't a static snapshot object, like a baseline. Instead, it evolves over time, even after the artifacts are produced. For example, once the build artifacts have been verified, we want the build record to reflect this.

In the same way that updates (i.e., change packages) need a state flow to indicate that they are In-Progress, Checked-In, Ready-for-the-Build, Successfully-Integrated, etc., builds need a state flow. A typical build state flow might look something like this:

  • Original: Original build notice created. Build content not yet defined.
  • Selected: Baseline and updates (and options, if any) selected to define the build contents. Build operation is in progress.
  • System integration tested: Build has successfully passed integration and sanity testing
  • System verification tested: Build has been successfully tested by the system verification team
  • Field: Build has been deployed in the field for field testing
  • Production: Build is now used as the production-level build for the specified product and release

Drawn as a state diagram, this flow runs forward from original through production, with rollbacks and cancellations as side paths.
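A minimal sketch of the flow as a transition table, using short state names that match the "select", "sitest", and "svtest" labels used in the policies below (the exact transitions are assumptions based on the flow just described):

    # Allowed promotions/rollbacks per state; names are illustrative short forms.
    TRANSITIONS = {
        "original":   {"select", "obsolete"},   # content defined, or notice cancelled
        "select":     {"sitest", "obsolete"},   # built and integration/sanity tested
        "sitest":     {"svtest", "select"},     # verified, or rolled back for a fix
        "svtest":     {"field"},                # definition now frozen
        "field":      {"production"},
        "production": set(),
        "obsolete":   set(),
    }

    def promote(current: str, target: str) -> str:
        """Move a build to a new state only along an allowed transition."""
        if target not in TRANSITIONS.get(current, set()):
            raise ValueError(f"illegal transition: {current} -> {target}")
        return target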

Now that we have states associated with a build, we can define properties that go along with those states.  These will reflect the policies of the product development team.  So, for example, we might have policies that say:

1. Once a build has successfully completed system verification testing, its definition is frozen. CM/ALM tools are configured such that they will not support changes to the build definition once its status reaches the "svtest" state.

2. If a change is made to the build definition in the "sitest" state, the status is rolled back to "select".  This might be the case if an urgent bug fix was needed to achieve build success.

3. If a publicly defined build notice has been cancelled, the status is set to obsolete.

4. A build can end up in any state. For example, if a build fails verification, it is not necessary to roll the build back from the "sitest" state to the "select" state, repeatedly adding updates until it passes verification. A subsequent build can be defined to hold updates for the next attempt. Only those builds that are selected for production will go through all of the promotion levels (at least as shown in our state flow above).

This is, perhaps, a fairly simple set of policies to define how builds are treated.  Yours may be simpler or more demanding.  But it is crucial that the CM or ALM tools used support these policies and do not permit inconsistencies.  So the CM/ALM tool should not permit, in our example, a build to be rolled back after it has attained "svtest" or higher status.
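A sketch of how tooling might enforce the freeze and rollback policies above, reusing the build record and state names sketched earlier (the ordering and function are assumptions, not a specific product's API):

    # States in promotion order; from "svtest" on, the definition is frozen (policy 1).
    ORDER = ["original", "select", "sitest", "svtest", "field", "production"]
    FROZEN_FROM = ORDER.index("svtest")

    def add_updates(build, new_updates):
        """Reject edits to a frozen build; demote "sitest" back to "select" (policy 2)."""
        if build.state in ORDER and ORDER.index(build.state) >= FROZEN_FROM:
            raise PermissionError(f"{build.build_id} is frozen at {build.state}")
        if build.state == "sitest":
            build.state = "select"    # definition changed, so the status rolls back
        build.updates.extend(new_updates)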

Builds and Context
A record of a build is important. It's critical to be able, at any time in the future, to recreate the build artifacts from the build record in the CM/ALM repository. Typically, a CM/ALM tool has a means to retrieve all of the "source" corresponding to the record so that the build can be reproduced.

However, a build record should be of much greater benefit in a next generation CM/ALM tool. You should be able to select any build record and place your CM/ALM tool session into the context of that build. Then, if you look at any file, the revision corresponding to the build record is used.

Furthermore, you should be able to ask questions such as: Is this update in the build? Did this problem get fixed in the build? Is this feature in the build? What test cases failed for this build?
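With build records like the one sketched earlier, plus the usual traceability maps, several of these questions reduce to simple lookups. A hedged sketch (real tools expose richer query facilities; the map names here are hypothetical):

    def update_in_build(build, update_id):
        """Is this update in the build?"""
        return update_id in build.updates

    def problem_fixed_in_build(build, problem_id, fixes_by_problem):
        """Did this problem get fixed? fixes_by_problem maps problem -> fixing updates."""
        return any(u in build.updates for u in fixes_by_problem.get(problem_id, []))

    def file_revision_in_context(build, path, baseline_revs, update_revs):
        """Which revision of a file does this build context see?
        The latest update touching the file wins; otherwise the baseline revision."""
        for u in reversed(build.updates):
            if path in update_revs.get(u, {}):
                return update_revs[u][path]
        return baseline_revs[path]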

One of the more general capabilities of a build, or a build context, is to be able to use it to compare against something else.  For example, you may wish to compare the contents of your workspace against a specific build record to verify that the workspace contains the same code, or perhaps to identify the differences.

Build Comparisons
There are much more significant build comparisons that should be a key capability in every shop, and next generation tools are necessary to realize them. For example, suppose I have a series of builds that work successfully, and then all of a sudden I find a big problem in one. I need to find out where the problem was introduced.

Step 1: Which build was it first introduced in?
This is answered by testing the build artifacts. Build reporting can help in several ways. First, if there is a specific test case for the failed feature, it can identify when that feature last passed testing. That leaves a series of, say, a dozen potential builds where the problem could have been introduced. The build records could then be used to test halfway back, to narrow the search. This binary search for the bad build could save a lot of test time, especially if it takes significant time or resources to set up the correct build and test.
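The binary search itself is straightforward; here's a minimal sketch over a chronological list of builds, assuming you supply a test_build callback that runs the failing test case against a build's artifacts:

    def first_bad_build(builds, test_build):
        """builds: oldest to newest; builds[0] is known good, builds[-1] known bad.
        test_build(build) -> True if the test passes. Returns the first bad build."""
        lo, hi = 0, len(builds) - 1           # invariant: lo passes, hi fails
        while hi - lo > 1:
            mid = (lo + hi) // 2
            if test_build(builds[mid]):
                lo = mid                      # problem introduced after mid
            else:
                hi = mid                      # mid already shows the problem
        return builds[hi]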

In some cases, it may be desirable to do a build comparison, in parallel, between the current build and the one a dozen builds back. At the source code level, this would be tedious to look through. But if the comparison shows the list of features addressed, problems fixed, and updates made between the two builds, it may be possible to zoom in on one or two suspect updates right away. If it's too much effort to get this comparison information and to zoom in on the details quickly, this option will rarely be used. But if you have a build comparison dashboard that rapidly shows you this data in an interactive, drill-down-capable fashion, it will become natural to make this your first step.
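Under the hood, such a comparison is largely a set difference over the two build records, joined to the traceability data. A sketch, with hypothetical map names:

    def compare_builds(old, new, update_meta):
        """Summarize what changed between two builds.
        update_meta maps update id -> {"title": ..., "problem": ..., "feature": ...}."""
        old_set = set(old.updates)
        delta = [u for u in new.updates if u not in old_set]
        return {
            "updates":  delta,
            "problems": sorted({update_meta[u]["problem"] for u in delta
                                if update_meta[u].get("problem")}),
            "features": sorted({update_meta[u]["feature"] for u in delta
                                if update_meta[u].get("feature")}),
        }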

In our shop, that's how we proceed. If we have a problem, we identify a build where we did not have the problem and bring up a build comparison dashboard. We identify suspect changes, and more than half the time we can zero in on the cause of the problem without any further testing. In other cases, we do the binary search through the builds performed between the successful build and the current (failed) build to narrow down where it broke. In either case, we end up looking for the offending update. But if the binary search/test is done first, we have a smaller set of updates to inspect.

Step 2: Which update introduced the problem?
So now we have two builds: one in which it works and one in which it doesn't. If we've done the binary search/test, these are adjacent builds; otherwise, there are a few builds in between. We bring up a dashboard, and first we look at the set of features addressed between the two builds. Any suspects? If so, we zoom in for more details and mark the suspects. Similarly, we look at problems that have been fixed between the two builds and mark any suspects. We can go directly from our suspect lists to the set of updates used to implement features or fix problems. Or we can bring up a separate list of updates between the two builds and inspect them, first by title, then by description, and, if necessary, by performing delta operations.

Having narrowed down our search to two adjacent builds, we can usually identify one or two suspect updates that caused the problem. However, sometimes we just can't. So we then take the list of updates that went into the failed build after the successful build, and we create a build with the first half of those updates. We test, and depending on the results, we either include half of the remaining updates (success case) or leave out half of the originally selected updates, and we do another build and test. When we're down to just a handful of updates, we can usually zoom in on the offending one by inspecting the deltas.
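This halving procedure is the same binary search applied to the update list. A sketch, assuming a single offending update, that updates stack independently, and that you can request a trial build of the good build plus a chosen prefix of the updates:

    def first_bad_update(suspect_updates, build_and_test):
        """suspect_updates: updates applied between the good and failed builds, in order.
        build_and_test(subset) -> True if good build + subset builds and passes.
        Returns the update whose inclusion first makes the build fail."""
        lo, hi = 0, len(suspect_updates)      # prefix of lo passes, prefix of hi fails
        while hi - lo > 1:
            mid = (lo + hi) // 2
            if build_and_test(suspect_updates[:mid]):
                lo = mid
            else:
                hi = mid
        return suspect_updates[hi - 1]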

We take this process for granted in our shop. But we also realize that many places struggle with these steps because they don't have the tools necessary to identify the sequence of builds that were done, or the sequence of updates (change packages, changesets, commits, or whatever they call them) between two builds. A build dashboard should be a one-click capability that automatically compares the previous two builds and then allows you to select alternate builds for comparison if necessary. It should come up with a list of features, problems, updates, and perhaps other data, from which you can drill down for details or against which you can right-click and ask: Which updates were used to implement this feature? What is the source code delta for this update? You get the picture.

There are plenty of other navigation capabilities in most next generation dashboards. If they are designed for a particular task or role, as with the build comparison dashboard described above, they can make life a lot simpler. No fumbling with commands or menus - just right to the point, zooming in as necessary. This same type of build comparison dashboard is invaluable to product managers who want to know what features and fixes have gone into the latest few builds.

Build and Change Query/Navigation
Builds and updates are perhaps the most central records of a CM database. They each have various links going back in time (causes) and forward in time (results). An update comes from assigned work items, which resulted from requirements, which came from customer (problem and feature) requests. Updates produce new revisions of files that are incorporated into builds. Similarly, builds come from a series of updates (with their traceability chain), usually defined in terms of a previous build, identifying updates to a baseline. They spawn build artifacts, test sessions (a sequence of test case runs and their results), customer site deliveries, etc. They are referenced by customer requests (especially for problems - which build was that found in?).
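As a small illustration of walking those links backward, here's a sketch that goes from a build to the customer requests behind it; the map names are hypothetical stand-ins for a tool's traceability data:

    def requests_behind_build(build, items_by_update, request_by_item):
        """Walk build -> updates -> work items -> originating customer requests."""
        requests = set()
        for u in build.updates:
            for item in items_by_update.get(u, []):
                req = request_by_item.get(item)
                if req:
                    requests.add(req)
        return sorted(requests)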

Your second generation tool might be great at creating these relationships and storing data against them. A next generation tool will populate most of these relationships automatically, based on the actions performed in managing updates and builds. If data is not captured automatically when it can be, it will be lower quality data. But even if the data is captured perfectly, if the navigation tools aren't there, or if they respond slowly or require too many clicks, the data won't be used nearly as often as it should be. This in turn will result in lower quality and longer delays. Your next generation build capabilities must provide rapid, easy navigation, eliminating the need for tool training and allowing you to focus on process training.

Meeting Support
Your query and navigation capabilities should be more than sufficient to drive your meetings. Maybe you have test session reviews where you review which test cases failed and which are still to be run. Maybe you have quality reviews to ensure that updates have been properly peer reviewed and coding standards have been followed. Next generation CM/ALM tools must be used to drive such meetings and to capture in-meeting results so that there is no lag built into the process. I might recommend that you have a dashboard/workstation specific to each meeting, with all of the information and all of the actions needed to drive your meeting to completion.

For example, your release meetings might present a list of outstanding problems and features that were originally targeted for the release. They might also show the failed test cases, with the ability to zoom in to details. And perhaps they show things such as burn-down charts, risk items, and a comparison between the previous release's quality march (i.e., the sequence of builds and their quality) and the current one.

Recurring Theme
If you've been following my columns, this will come as no surprise. If you want to support a next generation process, your tools must be able to adapt to your requirements, readily and easily.

Your build process, like any other process, will evolve. If your tools are static or provide limited support for the process, your process will quickly follow suit. So much for continuous improvement! If your data, your process, your dashboards, UI, and guidance can evolve as you go, however, you'll become more and more competitive in your ability to deliver products to market.

In the past, it was believed that tools follow your process. In the next generation, you'll see more and more of the idea that tools help to define and support your process. Whereas normally you wouldn't consider buying tools without first ensuring that they can support your process, true next generation tools are, by definition, able to support virtually any process. And the winners will be those that actually help you to advance your process beyond your expectations.
