Deployment is the New Build

Whilst it's clear that the generation of customer features takes place within the development, testing and QA teams, the business value inherent in these features is only unlocked once the application is actually running in a target environment, accessible to users. And it's not just the final release to production that requires a deployment: every UAT, performance or integration test generally needs an application running in a "real" environment, not a developer's local machine. One test, one deployment.

Given the well-known costs and adverse effects of faulty software in production, the ability to raise quality, usability and performance through a greatly increased number of testing cycles represents significant added value in itself.

One of the consequences of this increased attention is that the importance of build, release and deployment professionals as deliverers of business value is being recognized more and more. The heightened focus also means, though, that many companies are realizing that the effectiveness and simplicity of their release and deployment processes lag far behind those of build and continuous integration, and that the overhead is far higher.

As a (still) active developer whose day job involves studying, automating and improving deployment processes across the industry, and developing a vision for where deployment is heading, I was bound to run into the question of why deployment is so different from build in today's enterprises sooner or later. The resulting discussions and deliberations turned into a presentation at Devopsdays in Boston earlier this year.

What's in a word?
One of the challenges surrounding the discussion of deployment in relation to build, release, provisioning and other tasks in the application lifecycle management (ALM) process is the - thankfully decreasing - lack of a clear, shared definition of 'deployment'. Without wanting to promulgate this as the correct definition, for the context of this discussion I'd like to treat deployment as the process that

takes the components that make up a release (typically a specific version of an application) and gets them correctly set up in an infrastructure environment, so that the release is accessible to (end) users

This would differentiate it from build and release in that it assumes that the application components have already been created, and from provisioning and other infrastructure tasks in that the target infrastructure is already assumed to be present.

On-demand virtual or cloud environments, or virtual appliances, put a bit of a different spin on the issue, and will be a topic for another time.

Blast from the past
Taking the above as our working definition, the sobering picture is that, with very few exceptions, deployment now is pretty much at the stage where build was in the days of the make guru: a black box that somehow works, put together and operated by a specialist. With luck, this precious resource is still employed and around to fix or extend things when required, but if he or she is out to lunch you're simply out of luck.

To be fair, there are quite a few places where there is at least tooling or automation in place that tries to map out the sequence of steps or actions required to carry out a deployment, bringing some visibility and traceability and shining some light on the 'magic'. But this is still a long way away from the push-button experience that build is nowadays. Laboriously visualizing and walking through a process step-by-step is ultimately still a sign of a lack of trust; a truly mature process doesn't display its internals any more, it "just works". Much like build today.

The road to "just works"
Of course, build didn't start out as a "just works" process either, so what actually were the steps that made this transition possible? Looking at the evolution of Java build tooling over the past decade or so, there have been at least three main developments:

  1. Reusable commands

  2. Models

  3. Conventions++

Reusable commands
The first step, epitomised by Apache Ant, was the recognition that most of the low-level tasks or actions are the same whatever application is being built, whether calling a compiler, copying files or replacing placeholders.

Rather than copy-pasting the same OS commands into every new build script, we encapsulated these commands as libraries of reusable components that only needed to be written once.

Further, we discovered that certain patterns of step sequences would appear in many different builds. These 'chunks', such as constructing a classpath, or copying and processing static resources, evidently represented some higher-level build activity with a distinct function.
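
To make this concrete: Ant tasks are essentially Java classes with an execute() method, and a build is a sequence of such tasks. The following is a minimal sketch of that idea - reusable low-level commands composed into higher-level 'chunks' - with all class names purely illustrative rather than any real tool's API.

    // Minimal sketch of reusable build commands composed into higher-level
    // "chunks". Class names are illustrative, not any real tool's API.
    import java.util.Arrays;
    import java.util.List;

    interface BuildTask {
        void execute();
    }

    class CopyFilesTask implements BuildTask {
        private final String from, to;
        CopyFilesTask(String from, String to) { this.from = from; this.to = to; }
        public void execute() { System.out.println("copy " + from + " -> " + to); }
    }

    class CompileTask implements BuildTask {
        private final String sourceDir;
        CompileTask(String sourceDir) { this.sourceDir = sourceDir; }
        public void execute() { System.out.println("compile sources in " + sourceDir); }
    }

    // A "chunk": a named, reusable sequence of lower-level tasks.
    class TaskSequence implements BuildTask {
        private final String name;
        private final List<BuildTask> steps;
        TaskSequence(String name, BuildTask... steps) {
            this.name = name;
            this.steps = Arrays.asList(steps);
        }
        public void execute() {
            System.out.println("[" + name + "]");
            for (BuildTask step : steps) {
                step.execute();
            }
        }
    }

    public class ReusableCommands {
        public static void main(String[] args) {
            // "process resources" and "compile" recur in almost every Java build.
            BuildTask processResources = new TaskSequence("process-resources",
                new CopyFilesTask("src/main/resources", "target/classes"));
            BuildTask compile = new TaskSequence("compile",
                new CompileTask("src/main/java"));
            for (BuildTask chunk : Arrays.asList(processResources, compile)) {
                chunk.execute();
            }
        }
    }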

Models
Whilst the realisation that all the actions carried out in a build are basically a sequence of common chunks was an important start, the next big advance was brought about by recognising that we weren't just seeing repeated patterns of actions, but that the types of data these actions were working on were also shared.

This gave rise to the notion of a true domain model for application builds, with source sets, resource sets, modules, dependencies and so forth - concepts that were originally introduced by Maven and have been reused in every build system since.

Combining the sequence of common chunks with the new domain model that structured the data being processed gave rise to the notion of distinct phases, in which parts of the build model are generated, prepared and made available to subsequent commands.
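
As a rough sketch of what such a shared domain model and phases might look like - with all names hypothetical, not taken from Maven or any other tool - consider:

    // Sketch of a build domain model shared across phases. All names are
    // hypothetical; real tools define far richer models than this.
    import java.util.ArrayList;
    import java.util.List;

    class Dependency {
        final String groupId, artifactId, version;
        Dependency(String groupId, String artifactId, String version) {
            this.groupId = groupId;
            this.artifactId = artifactId;
            this.version = version;
        }
        public String toString() { return groupId + ":" + artifactId + ":" + version; }
    }

    class Module {
        final String name;
        final List<String> sourceSets = new ArrayList<>();
        final List<String> resourceSets = new ArrayList<>();
        final List<Dependency> dependencies = new ArrayList<>();
        Module(String name) { this.name = name; }
    }

    // Distinct phases, each of which prepares or consumes parts of the model.
    enum Phase { VALIDATE, COMPILE, TEST, PACKAGE, INSTALL }

    public class BuildModelSketch {
        public static void main(String[] args) {
            Module webapp = new Module("webapp");
            webapp.sourceSets.add("src/main/java");
            webapp.resourceSets.add("src/main/resources");
            webapp.dependencies.add(new Dependency("org.example", "commons", "1.2.0"));

            // Every phase works against the same structured model, rather than
            // against ad-hoc data passed between scripted steps.
            for (Phase phase : Phase.values()) {
                System.out.println(phase + ": " + webapp.name
                    + " (dependencies: " + webapp.dependencies + ")");
            }
        }
    }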

In addition, Maven supported the idea that the domain model, and thus the build phases, would need to vary slightly to accommodate the different types of Java artifacts that need to be delivered, such as JARs and EARs. This has subsequently been further developed to support builds of totally different technologies, such as Adobe Flex applications.

Conventions++
An additional benefit of domain models for build was the ability to make use of default values in a structured way, for instance for the names of built artifacts or the location of resource files.

However, the flip side of this convenience, certainly in combination with XML as a descriptor language for builds, was that deviating from these standards could be quite a challenge - certainly if the aim was to extend the domain model in some way, or to support a language or technology whose build flow was substantially different from Java's, such as building documentation bundles or virtual machine images.

This has led to a generation of build tools, such as Gradle, that aim to restore the developer to a position of full control in which arbitrary actions can easily be defined and organised into phases, tasks and entire builds. Of course, given how used we have become to the convenience of "it just works" in simple cases, these tools still support the full domain models and conventions of common technologies such as Java.
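
A minimal sketch of the convention-over-configuration idea described above, assuming nothing more than defaults that can be explicitly overridden (the keys and default values are illustrative only):

    // Sketch of convention over configuration: sensible defaults apply unless
    // explicitly overridden. Keys and default values are illustrative only.
    import java.util.HashMap;
    import java.util.Map;

    public class ConventionSketch {
        // Conventional defaults, in the spirit of Maven's standard layout.
        private static final Map<String, String> DEFAULTS = new HashMap<>();
        static {
            DEFAULTS.put("sourceDirectory", "src/main/java");
            DEFAULTS.put("resourceDirectory", "src/main/resources");
            DEFAULTS.put("outputDirectory", "target/classes");
        }

        private final Map<String, String> overrides = new HashMap<>();

        String setting(String key) {
            // An override wins; otherwise the convention "just works".
            return overrides.getOrDefault(key, DEFAULTS.get(key));
        }

        void override(String key, String value) {
            overrides.put(key, value);
        }

        public static void main(String[] args) {
            ConventionSketch build = new ConventionSketch();
            System.out.println(build.setting("sourceDirectory")); // src/main/java

            // Deviating is possible, but every deviation is extra configuration
            // that has to be written, maintained and explained.
            build.override("sourceDirectory", "sources/java");
            System.out.println(build.setting("sourceDirectory")); // sources/java
        }
    }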

Who'd have thought?
Reviewing this progression from today's perspective, a couple of facts stand out that, certainly in comparison to other evolutions in IT, are quite surprising.

Firstly, whilst it's now hard to imagine specifying a dependency using anything other than the groupId:artifactId:version pattern, none of the models or conventions that developed were formalised in industry standards. Instead, they were either based on observations of common patterns, or simply clever or even somewhat arbitrary choices ('src/main/java', for instance).

Secondly, we have seen how ease-of-use based on conventions, coupled with a moderate nuisance factor of tweaking those conventions, can dramatically change user behaviour. Initially, for instance, many of those new to Maven spent quite a significant amount of time, and produced a fair amount of XML, to change the standard settings to match their own environment, naming conventions, file paths etc.

Pretty soon, though, and especially as the ratio of green field vs. legacy projects increased, it simply became easier to stick with Maven's standard values and be done with it. Today, these conventions have become so standardised that they are supported not just by Maven, but essentially all other build systems out there, too.

It's not just users' preferences that were "charmed" into adopting standard conventions, though. In many cases, company standards previously seen as cast-iron were progressively discarded or modified if they could not easily be accommodated by the emerging de facto standards. Ease-of-use was able to triumph over abstract rules.

And deployment?
So much for the build process. What about deployment, today's critical hurdle for automation in the business value delivery chain? As previously mentioned, the current industry average is somewhere between "black box" and "step sequence". In terms of the build process evolution described above, in other words, the most advanced deployment automation systems are somewhere just beyond "reusable commands".

This naturally raises the question: how do we get to a push-button state? What do we need to do to reach the maturity level of build today?

Looking at what we encounter in the industry today, three steps will be critical:

  1. Develop a common model

  2. (Re)discover vanilla

  3. Support a "clean build"

Develop a common model
Before we can advance to the 'model' stage, we first...well...need a model. Thankfully, a very simple one can suffice: Packages, Environments and Deployments.

There's nothing particularly magical about this, and indeed the concepts are commonly found in all organisations. But giving these things explicit labels not only helps formalize the ideas and gives developers and vendors something to support; it also creates a shared vocabulary and language around deployment, which is the first step towards shared understanding and reusable functionality.

Indeed, the concepts are so basic that there does not appear to be much to say about them. Packages capture the components of the versioned item to be released, both artifacts represented by actual files as well as configuration, resource settings and metadata.
In accordance with release management best practice, packages should be stored in a definitive software library (DSL) and should be independent of the target environment, so that you have one "gold standard" package running in Development, Test, QA and Production.
Packages also mean that we can version everything, not just the application binaries but also the related configuration and environment settings.

The Development and Test environments just mentioned are examples of Environments: simply collections of infrastructure - physical, virtual, long-running, on-demand, whatever - that applications run in as they progress through the ALM cycle, potentially with approvals or other checkpoints governing the transition from one to the next.

Deployment, then, is perhaps the one concept that is not immediately widely understood. A Deployment represents not just the activity of getting a Package running in a certain Environment, with a start and stop time, executing user, status and so forth.
Rather, a Deployment also documents the way in which the Package's components have been deployed and, if applicable, customized. For instance, a Deployment will record that a certain EAR file in the package has been deployed to a particular set of target servers or a cluster, or that the data source password for this specific environment has been customized and set to a new value.

Recording this information is critical because it is very hard to intelligently and correctly adapt an application's state - when upgrading to a new version, for instance, or adding new servers to the target cluster - if you do not know where the application is currently running.
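
To illustrate just how small this common model really is, here is a hedged sketch in code; the class and field names are hypothetical and not taken from any particular product:

    // Minimal sketch of the Package / Environment / Deployment model.
    // All class and field names are hypothetical, not any product's API.
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;

    // The "Package" concept: a versioned, environment-independent release
    // containing artifacts plus configuration, resource settings and metadata.
    class ReleasePackage {
        final String application, version;
        final List<String> artifacts;         // e.g. "shop.ear"
        final Map<String, String> resources;  // e.g. data source definitions
        ReleasePackage(String application, String version,
                       List<String> artifacts, Map<String, String> resources) {
            this.application = application;
            this.version = version;
            this.artifacts = artifacts;
            this.resources = resources;
        }
    }

    // An Environment: a named collection of infrastructure that applications run in.
    class Environment {
        final String name;              // e.g. "Test", "Production"
        final List<String> servers;
        Environment(String name, List<String> servers) {
            this.name = name;
            this.servers = servers;
        }
    }

    // A Deployment: the record of which package went to which environment,
    // where its components ended up and how they were customized.
    class Deployment {
        final ReleasePackage pkg;
        final Environment env;
        final Map<String, List<String>> placement = new HashMap<>();  // artifact -> servers
        final Map<String, String> customizations = new HashMap<>();   // setting -> env-specific value
        Deployment(ReleasePackage pkg, Environment env) {
            this.pkg = pkg;
            this.env = env;
        }
    }

    public class DeploymentModelSketch {
        public static void main(String[] args) {
            ReleasePackage release = new ReleasePackage("webshop", "2.3.1",
                List.of("shop.ear"), Map.of("datasource.url", "PLACEHOLDER"));
            Environment test = new Environment("Test", List.of("test-srv-01", "test-srv-02"));

            Deployment deployment = new Deployment(release, test);
            deployment.placement.put("shop.ear", test.servers);
            deployment.customizations.put("datasource.password", "********");

            // Because the deployment is recorded, a later upgrade or scale-out
            // knows exactly where this version is currently running.
            System.out.println(release.application + " " + release.version
                + " deployed to " + test.name + ": " + deployment.placement);
        }
    }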

(Re)discover vanilla
If we are going to achieve hassle-free, push-button deployments, another thing we will have to reconsider is whether we really need to tweak and customize our infrastructure in every way possible. Indeed, some companies seem to almost have a policy that any setting that might be a default should be regarded with suspicion and, preferably, changed.
Much as custom project layouts made setting up a build unnecessarily tedious and complicated in a convention- and model-based system, stubbornly refusing to go with infrastructure defaults will make it harder to get hassle-free deployments that truly cover all the steps required.

Sticking with defaults not only encourages reusability, because the chances are much higher that a solution developed for a different scenario will also work in yours; it also improves maintainability and cuts down on the risk of "ripple" changes, where a custom setting for the servers hosting application X requires further changes to the setup of application Y, and so on.

Support a "clean build"
When building a large project, we try to cut down on the time taken by recompiling only the source code that has been modified. When deploying applications, we similarly want to save time when upgrading to a new version, especially when this time represents production downtime.

However, we also know that, eventually, some parts of any incremental build will end up going out of sync, causing strange compilation problems, or features or fixes not appearing when they should.

What do we do in such a case? Do we laboriously try to track down the files that are out of sync and rebuild piece by piece? No, we simply run a clean build to start from scratch, because in 99% of cases it's much quicker to simply rebuild than try to track down the cause of the problem.

In deployment-land, we seldom have the ability to clean build, and this is one of the main causes for the stressful, time- and resource-consuming troubleshooting hunts that are still far too common. Of course, in order to clean build a system we need full versioning of the environment, its configuration and the applications deployed to it. Virtual appliances and virtualization solutions with snapshot capabilities will have a major role to play here.

We also need a known state for durable resources such as databases, which remains challenging but is being addressed by a growing number of products out there.
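
As a rough sketch of the difference between an incremental upgrade and a "clean" deployment from a versioned baseline (all names and the in-memory state are illustrative only):

    // Sketch contrasting an incremental upgrade with a "clean" deployment that
    // starts from a versioned baseline. All names and state are illustrative.
    import java.util.HashMap;
    import java.util.Map;

    public class CleanDeploySketch {
        // Current state of an environment: artifact name -> deployed version.
        static Map<String, String> environmentState = new HashMap<>();

        // Incremental: only touch artifacts whose version differs. Fast, but any
        // drift that crept in outside the tool's knowledge is silently kept.
        static void incrementalDeploy(Map<String, String> desired) {
            for (Map.Entry<String, String> entry : desired.entrySet()) {
                String current = environmentState.get(entry.getKey());
                if (!entry.getValue().equals(current)) {
                    System.out.println("upgrade " + entry.getKey() + ": "
                        + current + " -> " + entry.getValue());
                    environmentState.put(entry.getKey(), entry.getValue());
                }
            }
        }

        // Clean: restore the versioned baseline (e.g. a snapshot), then deploy the
        // full package, so the result depends only on known, versioned inputs.
        static void cleanDeploy(Map<String, String> baseline, Map<String, String> desired) {
            environmentState = new HashMap<>(baseline);
            System.out.println("restored baseline: " + baseline);
            for (Map.Entry<String, String> entry : desired.entrySet()) {
                System.out.println("deploy " + entry.getKey() + " " + entry.getValue());
                environmentState.put(entry.getKey(), entry.getValue());
            }
        }

        public static void main(String[] args) {
            Map<String, String> baseline = Map.of("app-server-config", "1.0");
            environmentState.put("app-server-config", "1.0");
            environmentState.put("shop.ear", "2.3.0");

            Map<String, String> release = Map.of("shop.ear", "2.3.1");
            incrementalDeploy(release);        // quick upgrade for routine releases
            cleanDeploy(baseline, release);    // start from scratch when state is suspect
        }
    }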

Push button deployments
Taking stock, it's clear that there is still some way to go. We're slowly developing a common model, but both "(re)discovering vanilla" and supporting a "clean build" are visions not quite yet on the horizon of most large companies.

In fact, it's not so much technological advances that are required - many startups are pretty close to push-button deployments and continuous delivery. Indeed, the "poster children" of this movement already have setups where every commit can pass through an entire regression, integration and performance testing suite and potentially go straight to production.

No, the important hurdles to be taken are procedural and mental, changing rusty ways of working and entrenched mindsets.

For those that can make it, though, the benefits in terms of accelerated business value will be a game changer. And build and release management professionals such as yourself will be a key part of this change!

About the Author
Andrew Phillips, VP of Product Development, XebiaLabs, joined the company in March of 2009 where he is responsible for the development of XebiaLabs’ deployment automation product, Deployit. He is a regular industry contributor and often speaks and writes on topics surrounding release management and deployment automation.
